Test Report: KVM_Linux_crio 21657

666c3351e3298333ddd2e3f0587bd3e8ac91c0cd:2025-09-29:41679

Test fail (13/324)

TestAddons/parallel/Ingress (491.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-911532 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-911532 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-911532 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c16b0297-3ef5-4961-9f5e-0019acc5ea5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-911532 -n addons-911532
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-29 10:32:03.509708556 +0000 UTC m=+747.575784340
addons_test.go:252: (dbg) Run:  kubectl --context addons-911532 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-911532 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-911532/192.168.39.179
Start Time:       Mon, 29 Sep 2025 10:24:03 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4bxx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j4bxx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-911532
Warning  Failed     2m36s (x3 over 6m25s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    116s (x4 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     85s (x4 over 6m25s)    kubelet            Error: ErrImagePull
Warning  Failed     85s                    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    7s (x10 over 6m24s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     7s (x10 over 6m24s)    kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-911532 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-911532 logs nginx -n default: exit status 1 (68.124347ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-911532 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
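Note: the kubelet events above show the pod never became ready because unauthenticated pulls of docker.io/nginx:alpine hit Docker Hub's rate limit (toomanyrequests), not because of an ingress regression. A minimal sketch of two possible workarounds, assuming the same addons-911532 profile/context as this run (the registry credentials below are placeholders):

	# side-load the image into the cluster so the kubelet never pulls it from Docker Hub
	minikube -p addons-911532 image load docker.io/nginx:alpine

	# or authenticate pulls by attaching an image pull secret to the default service account
	kubectl --context addons-911532 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-911532 patch serviceaccount default -p '{"imagePullSecrets":[{"name":"regcred"}]}'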
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-911532 -n addons-911532
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 logs -n 25: (1.308850801s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-910458                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ --download-only -p binary-mirror-757361 --alsologtostderr --binary-mirror http://127.0.0.1:43621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ -p binary-mirror-757361                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ addons  │ disable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ start   │ -p addons-911532 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ enable headlamp -p addons-911532 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ip      │ addons-911532 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:27 UTC │ 29 Sep 25 10:28 UTC │
	│ addons  │ addons-911532 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:29 UTC │ 29 Sep 25 10:29 UTC │
	│ addons  │ addons-911532 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:29 UTC │ 29 Sep 25 10:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:49.657940    8330 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:49.658280    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658293    8330 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:49.658299    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658774    8330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:19:49.659724    8330 out.go:368] Setting JSON to false
	I0929 10:19:49.660569    8330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":135,"bootTime":1759141055,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:49.660646    8330 start.go:140] virtualization: kvm guest
	I0929 10:19:49.662346    8330 out.go:179] * [addons-911532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:49.663847    8330 notify.go:220] Checking for updates...
	I0929 10:19:49.663868    8330 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:19:49.665023    8330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:49.666170    8330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:19:49.667465    8330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:49.668605    8330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:19:49.669820    8330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:19:49.670997    8330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:49.700388    8330 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 10:19:49.701463    8330 start.go:304] selected driver: kvm2
	I0929 10:19:49.701479    8330 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:19:49.701491    8330 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:19:49.702129    8330 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.702205    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.715255    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.715283    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.729163    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.729198    8330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:49.729518    8330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:19:49.729559    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:19:49.729599    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:19:49.729607    8330 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:49.729659    8330 start.go:348] cluster config:
	{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:49.729764    8330 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.731718    8330 out.go:179] * Starting "addons-911532" primary control-plane node in "addons-911532" cluster
	I0929 10:19:49.732842    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:49.732885    8330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:49.732892    8330 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:49.732961    8330 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:19:49.732971    8330 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:19:49.733271    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:19:49.733296    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json: {Name:mk3b1c31f51191d700bb099fb8f771ac33c82a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:19:49.733457    8330 start.go:360] acquireMachinesLock for addons-911532: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 10:19:49.733506    8330 start.go:364] duration metric: took 34.004µs to acquireMachinesLock for "addons-911532"
	I0929 10:19:49.733524    8330 start.go:93] Provisioning new machine with config: &{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:19:49.733580    8330 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 10:19:49.735166    8330 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 10:19:49.735279    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:19:49.735315    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:19:49.747570    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0929 10:19:49.748034    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:19:49.748606    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:19:49.748628    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:19:49.748980    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:19:49.749155    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:19:49.749278    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:19:49.749427    8330 start.go:159] libmachine.API.Create for "addons-911532" (driver="kvm2")
	I0929 10:19:49.749454    8330 client.go:168] LocalClient.Create starting
	I0929 10:19:49.749497    8330 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem
	I0929 10:19:49.897019    8330 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem
	I0929 10:19:49.971089    8330 main.go:141] libmachine: Running pre-create checks...
	I0929 10:19:49.971109    8330 main.go:141] libmachine: (addons-911532) Calling .PreCreateCheck
	I0929 10:19:49.971568    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:19:49.971999    8330 main.go:141] libmachine: Creating machine...
	I0929 10:19:49.972014    8330 main.go:141] libmachine: (addons-911532) Calling .Create
	I0929 10:19:49.972178    8330 main.go:141] libmachine: (addons-911532) creating domain...
	I0929 10:19:49.972189    8330 main.go:141] libmachine: (addons-911532) creating network...
	I0929 10:19:49.973497    8330 main.go:141] libmachine: (addons-911532) DBG | found existing default network
	I0929 10:19:49.973637    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.973653    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>default</name>
	I0929 10:19:49.973661    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 10:19:49.973670    8330 main.go:141] libmachine: (addons-911532) DBG |   <forward mode='nat'>
	I0929 10:19:49.973677    8330 main.go:141] libmachine: (addons-911532) DBG |     <nat>
	I0929 10:19:49.973688    8330 main.go:141] libmachine: (addons-911532) DBG |       <port start='1024' end='65535'/>
	I0929 10:19:49.973700    8330 main.go:141] libmachine: (addons-911532) DBG |     </nat>
	I0929 10:19:49.973706    8330 main.go:141] libmachine: (addons-911532) DBG |   </forward>
	I0929 10:19:49.973715    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 10:19:49.973722    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 10:19:49.973731    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 10:19:49.973740    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.973749    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 10:19:49.973765    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.973776    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.973780    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.973787    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974334    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:49.974184    8358 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200dd0}
	I0929 10:19:49.974373    8330 main.go:141] libmachine: (addons-911532) DBG | defining private network:
	I0929 10:19:49.974397    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974420    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.974439    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:49.974466    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:49.974489    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:49.974503    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.974515    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:49.974525    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.974531    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.974536    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.974542    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.980371    8330 main.go:141] libmachine: (addons-911532) DBG | creating private network mk-addons-911532 192.168.39.0/24...
	I0929 10:19:50.045524    8330 main.go:141] libmachine: (addons-911532) DBG | private network mk-addons-911532 192.168.39.0/24 created
	I0929 10:19:50.045754    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:50.045775    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:50.045788    8330 main.go:141] libmachine: (addons-911532) setting up store path in /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.045815    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>1948f630-90e3-4c16-adbb-718b17efed7e</uuid>
	I0929 10:19:50.045832    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 10:19:50.045851    8330 main.go:141] libmachine: (addons-911532) building disk image from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:19:50.045876    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:30:e5:b4'/>
	I0929 10:19:50.045894    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:50.045921    8330 main.go:141] libmachine: (addons-911532) Downloading /home/jenkins/minikube-integration/21657-3816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 10:19:50.045936    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:50.045954    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:50.045966    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:50.045976    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:50.045985    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:50.045994    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:50.046009    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:50.046032    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.045748    8358 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.297023    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.296839    8358 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa...
	I0929 10:19:50.440022    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.439881    8358 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk...
	I0929 10:19:50.440071    8330 main.go:141] libmachine: (addons-911532) DBG | Writing magic tar header
	I0929 10:19:50.440088    8330 main.go:141] libmachine: (addons-911532) DBG | Writing SSH key tar header
	I0929 10:19:50.440542    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.440479    8358 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.440591    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532
	I0929 10:19:50.440619    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 (perms=drwx------)
	I0929 10:19:50.440632    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines (perms=drwxr-xr-x)
	I0929 10:19:50.440640    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines
	I0929 10:19:50.440665    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.440675    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816
	I0929 10:19:50.440683    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube (perms=drwxr-xr-x)
	I0929 10:19:50.440696    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816 (perms=drwxrwxr-x)
	I0929 10:19:50.440709    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 10:19:50.440718    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 10:19:50.440730    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins
	I0929 10:19:50.440740    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 10:19:50.440750    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home
	I0929 10:19:50.440759    8330 main.go:141] libmachine: (addons-911532) DBG | skipping /home - not owner
	I0929 10:19:50.440766    8330 main.go:141] libmachine: (addons-911532) defining domain...
	I0929 10:19:50.441750    8330 main.go:141] libmachine: (addons-911532) defining domain using XML: 
	I0929 10:19:50.441770    8330 main.go:141] libmachine: (addons-911532) <domain type='kvm'>
	I0929 10:19:50.441785    8330 main.go:141] libmachine: (addons-911532)   <name>addons-911532</name>
	I0929 10:19:50.441795    8330 main.go:141] libmachine: (addons-911532)   <memory unit='MiB'>4096</memory>
	I0929 10:19:50.441807    8330 main.go:141] libmachine: (addons-911532)   <vcpu>2</vcpu>
	I0929 10:19:50.441815    8330 main.go:141] libmachine: (addons-911532)   <features>
	I0929 10:19:50.441823    8330 main.go:141] libmachine: (addons-911532)     <acpi/>
	I0929 10:19:50.441831    8330 main.go:141] libmachine: (addons-911532)     <apic/>
	I0929 10:19:50.441838    8330 main.go:141] libmachine: (addons-911532)     <pae/>
	I0929 10:19:50.441843    8330 main.go:141] libmachine: (addons-911532)   </features>
	I0929 10:19:50.441851    8330 main.go:141] libmachine: (addons-911532)   <cpu mode='host-passthrough'>
	I0929 10:19:50.441858    8330 main.go:141] libmachine: (addons-911532)   </cpu>
	I0929 10:19:50.441866    8330 main.go:141] libmachine: (addons-911532)   <os>
	I0929 10:19:50.441873    8330 main.go:141] libmachine: (addons-911532)     <type>hvm</type>
	I0929 10:19:50.441881    8330 main.go:141] libmachine: (addons-911532)     <boot dev='cdrom'/>
	I0929 10:19:50.441885    8330 main.go:141] libmachine: (addons-911532)     <boot dev='hd'/>
	I0929 10:19:50.441892    8330 main.go:141] libmachine: (addons-911532)     <bootmenu enable='no'/>
	I0929 10:19:50.441896    8330 main.go:141] libmachine: (addons-911532)   </os>
	I0929 10:19:50.441903    8330 main.go:141] libmachine: (addons-911532)   <devices>
	I0929 10:19:50.441907    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='cdrom'>
	I0929 10:19:50.441927    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.441934    8330 main.go:141] libmachine: (addons-911532)       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.441939    8330 main.go:141] libmachine: (addons-911532)       <readonly/>
	I0929 10:19:50.441943    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441951    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='disk'>
	I0929 10:19:50.441959    8330 main.go:141] libmachine: (addons-911532)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 10:19:50.441966    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.441973    8330 main.go:141] libmachine: (addons-911532)       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.441978    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441990    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.441998    8330 main.go:141] libmachine: (addons-911532)       <source network='mk-addons-911532'/>
	I0929 10:19:50.442004    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442009    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442016    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.442022    8330 main.go:141] libmachine: (addons-911532)       <source network='default'/>
	I0929 10:19:50.442028    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442033    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442039    8330 main.go:141] libmachine: (addons-911532)     <serial type='pty'>
	I0929 10:19:50.442044    8330 main.go:141] libmachine: (addons-911532)       <target port='0'/>
	I0929 10:19:50.442050    8330 main.go:141] libmachine: (addons-911532)     </serial>
	I0929 10:19:50.442055    8330 main.go:141] libmachine: (addons-911532)     <console type='pty'>
	I0929 10:19:50.442067    8330 main.go:141] libmachine: (addons-911532)       <target type='serial' port='0'/>
	I0929 10:19:50.442072    8330 main.go:141] libmachine: (addons-911532)     </console>
	I0929 10:19:50.442078    8330 main.go:141] libmachine: (addons-911532)     <rng model='virtio'>
	I0929 10:19:50.442084    8330 main.go:141] libmachine: (addons-911532)       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.442090    8330 main.go:141] libmachine: (addons-911532)     </rng>
	I0929 10:19:50.442094    8330 main.go:141] libmachine: (addons-911532)   </devices>
	I0929 10:19:50.442100    8330 main.go:141] libmachine: (addons-911532) </domain>
	I0929 10:19:50.442106    8330 main.go:141] libmachine: (addons-911532) 
	I0929 10:19:50.449537    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:be:29:87 in network default
	I0929 10:19:50.449973    8330 main.go:141] libmachine: (addons-911532) starting domain...
	I0929 10:19:50.449986    8330 main.go:141] libmachine: (addons-911532) ensuring networks are active...
	I0929 10:19:50.450009    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:50.450701    8330 main.go:141] libmachine: (addons-911532) Ensuring network default is active
	I0929 10:19:50.451007    8330 main.go:141] libmachine: (addons-911532) Ensuring network mk-addons-911532 is active
	I0929 10:19:50.451538    8330 main.go:141] libmachine: (addons-911532) getting domain XML...
	I0929 10:19:50.452379    8330 main.go:141] libmachine: (addons-911532) DBG | starting domain XML:
	I0929 10:19:50.452399    8330 main.go:141] libmachine: (addons-911532) DBG | <domain type='kvm'>
	I0929 10:19:50.452408    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>addons-911532</name>
	I0929 10:19:50.452415    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>0c8a2bbd-7687-4c1a-8020-738f402773b8</uuid>
	I0929 10:19:50.452446    8330 main.go:141] libmachine: (addons-911532) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 10:19:50.452469    8330 main.go:141] libmachine: (addons-911532) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 10:19:50.452483    8330 main.go:141] libmachine: (addons-911532) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 10:19:50.452491    8330 main.go:141] libmachine: (addons-911532) DBG |   <os>
	I0929 10:19:50.452498    8330 main.go:141] libmachine: (addons-911532) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 10:19:50.452505    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='cdrom'/>
	I0929 10:19:50.452514    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='hd'/>
	I0929 10:19:50.452525    8330 main.go:141] libmachine: (addons-911532) DBG |     <bootmenu enable='no'/>
	I0929 10:19:50.452545    8330 main.go:141] libmachine: (addons-911532) DBG |   </os>
	I0929 10:19:50.452558    8330 main.go:141] libmachine: (addons-911532) DBG |   <features>
	I0929 10:19:50.452564    8330 main.go:141] libmachine: (addons-911532) DBG |     <acpi/>
	I0929 10:19:50.452573    8330 main.go:141] libmachine: (addons-911532) DBG |     <apic/>
	I0929 10:19:50.452589    8330 main.go:141] libmachine: (addons-911532) DBG |     <pae/>
	I0929 10:19:50.452598    8330 main.go:141] libmachine: (addons-911532) DBG |   </features>
	I0929 10:19:50.452605    8330 main.go:141] libmachine: (addons-911532) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 10:19:50.452612    8330 main.go:141] libmachine: (addons-911532) DBG |   <clock offset='utc'/>
	I0929 10:19:50.452628    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 10:19:50.452639    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_reboot>restart</on_reboot>
	I0929 10:19:50.452649    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_crash>destroy</on_crash>
	I0929 10:19:50.452658    8330 main.go:141] libmachine: (addons-911532) DBG |   <devices>
	I0929 10:19:50.452665    8330 main.go:141] libmachine: (addons-911532) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 10:19:50.452674    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='cdrom'>
	I0929 10:19:50.452680    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw'/>
	I0929 10:19:50.452692    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.452710    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.452726    8330 main.go:141] libmachine: (addons-911532) DBG |       <readonly/>
	I0929 10:19:50.452740    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 10:19:50.452748    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452760    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='disk'>
	I0929 10:19:50.452768    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 10:19:50.452781    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.452797    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.452811    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 10:19:50.452820    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452832    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 10:19:50.452844    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 10:19:50.452853    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452868    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 10:19:50.452882    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 10:19:50.452894    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 10:19:50.452905    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452917    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452928    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:96:11:56'/>
	I0929 10:19:50.452937    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='mk-addons-911532'/>
	I0929 10:19:50.452945    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.452955    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 10:19:50.452975    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.452983    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452999    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:be:29:87'/>
	I0929 10:19:50.453014    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='default'/>
	I0929 10:19:50.453022    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.453031    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 10:19:50.453042    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.453053    8330 main.go:141] libmachine: (addons-911532) DBG |     <serial type='pty'>
	I0929 10:19:50.453062    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='isa-serial' port='0'>
	I0929 10:19:50.453073    8330 main.go:141] libmachine: (addons-911532) DBG |         <model name='isa-serial'/>
	I0929 10:19:50.453081    8330 main.go:141] libmachine: (addons-911532) DBG |       </target>
	I0929 10:19:50.453088    8330 main.go:141] libmachine: (addons-911532) DBG |     </serial>
	I0929 10:19:50.453094    8330 main.go:141] libmachine: (addons-911532) DBG |     <console type='pty'>
	I0929 10:19:50.453106    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='serial' port='0'/>
	I0929 10:19:50.453114    8330 main.go:141] libmachine: (addons-911532) DBG |     </console>
	I0929 10:19:50.453119    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='mouse' bus='ps2'/>
	I0929 10:19:50.453131    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 10:19:50.453138    8330 main.go:141] libmachine: (addons-911532) DBG |     <audio id='1' type='none'/>
	I0929 10:19:50.453144    8330 main.go:141] libmachine: (addons-911532) DBG |     <memballoon model='virtio'>
	I0929 10:19:50.453153    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 10:19:50.453158    8330 main.go:141] libmachine: (addons-911532) DBG |     </memballoon>
	I0929 10:19:50.453162    8330 main.go:141] libmachine: (addons-911532) DBG |     <rng model='virtio'>
	I0929 10:19:50.453170    8330 main.go:141] libmachine: (addons-911532) DBG |       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.453176    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 10:19:50.453193    8330 main.go:141] libmachine: (addons-911532) DBG |     </rng>
	I0929 10:19:50.453213    8330 main.go:141] libmachine: (addons-911532) DBG |   </devices>
	I0929 10:19:50.453227    8330 main.go:141] libmachine: (addons-911532) DBG | </domain>
	I0929 10:19:50.453239    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:51.804030    8330 main.go:141] libmachine: (addons-911532) waiting for domain to start...
	I0929 10:19:51.805192    8330 main.go:141] libmachine: (addons-911532) domain is now running
	I0929 10:19:51.805217    8330 main.go:141] libmachine: (addons-911532) waiting for IP...
	I0929 10:19:51.805985    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:51.806446    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:51.806469    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:51.806682    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:51.806731    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:51.806690    8358 retry.go:31] will retry after 261.427598ms: waiting for domain to come up
	I0929 10:19:52.070280    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.070742    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.070767    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.070971    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.070993    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.070958    8358 retry.go:31] will retry after 240.955253ms: waiting for domain to come up
	I0929 10:19:52.313494    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.313944    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.313967    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.314221    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.314248    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.314183    8358 retry.go:31] will retry after 448.127739ms: waiting for domain to come up
	I0929 10:19:52.763659    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.764289    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.764319    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.764571    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.764611    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.764572    8358 retry.go:31] will retry after 440.800517ms: waiting for domain to come up
	I0929 10:19:53.207391    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.207852    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.207875    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.208100    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.208135    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.208089    8358 retry.go:31] will retry after 608.456206ms: waiting for domain to come up
	I0929 10:19:53.817995    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.818510    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.818534    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.818802    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.818825    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.818782    8358 retry.go:31] will retry after 587.200151ms: waiting for domain to come up
	I0929 10:19:54.407631    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:54.408171    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:54.408193    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:54.408543    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:54.408576    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:54.408497    8358 retry.go:31] will retry after 1.130343319s: waiting for domain to come up
	I0929 10:19:55.540378    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:55.540927    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:55.540953    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:55.541189    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:55.541213    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:55.541166    8358 retry.go:31] will retry after 1.101264298s: waiting for domain to come up
	I0929 10:19:56.643818    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:56.644330    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:56.644369    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:56.644602    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:56.644625    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:56.644570    8358 retry.go:31] will retry after 1.643468675s: waiting for domain to come up
	I0929 10:19:58.290455    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:58.290889    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:58.290912    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:58.291164    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:58.291183    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:58.291128    8358 retry.go:31] will retry after 1.40280966s: waiting for domain to come up
	I0929 10:19:59.695464    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:59.695974    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:59.695992    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:59.696272    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:59.696323    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:59.696265    8358 retry.go:31] will retry after 1.862603319s: waiting for domain to come up
	I0929 10:20:01.561785    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:01.562380    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:01.562407    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:01.562655    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:01.562683    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:01.562634    8358 retry.go:31] will retry after 2.941456391s: waiting for domain to come up
	I0929 10:20:04.507942    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:04.508465    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:04.508487    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:04.508708    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:04.508754    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:04.508692    8358 retry.go:31] will retry after 3.063009242s: waiting for domain to come up
	I0929 10:20:07.575419    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.575975    8330 main.go:141] libmachine: (addons-911532) found domain IP: 192.168.39.179
	I0929 10:20:07.575990    8330 main.go:141] libmachine: (addons-911532) reserving static IP address...
	I0929 10:20:07.575998    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has current primary IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.576366    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find host DHCP lease matching {name: "addons-911532", mac: "52:54:00:96:11:56", ip: "192.168.39.179"} in network mk-addons-911532
	I0929 10:20:07.774232    8330 main.go:141] libmachine: (addons-911532) DBG | Getting to WaitForSSH function...
	I0929 10:20:07.774263    8330 main.go:141] libmachine: (addons-911532) reserved static IP address 192.168.39.179 for domain addons-911532
	I0929 10:20:07.774309    8330 main.go:141] libmachine: (addons-911532) waiting for SSH...
	I0929 10:20:07.777412    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.777949    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.777974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.778160    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH client type: external
	I0929 10:20:07.778178    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa (-rw-------)
	I0929 10:20:07.778240    8330 main.go:141] libmachine: (addons-911532) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 10:20:07.778264    8330 main.go:141] libmachine: (addons-911532) DBG | About to run SSH command:
	I0929 10:20:07.778276    8330 main.go:141] libmachine: (addons-911532) DBG | exit 0
	I0929 10:20:07.917138    8330 main.go:141] libmachine: (addons-911532) DBG | SSH cmd err, output: <nil>: 
	I0929 10:20:07.917411    8330 main.go:141] libmachine: (addons-911532) domain creation complete
	I0929 10:20:07.917792    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:07.918434    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918664    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918846    8330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 10:20:07.918860    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:07.920305    8330 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 10:20:07.920320    8330 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 10:20:07.920325    8330 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 10:20:07.920330    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:07.922896    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923256    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.923281    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923438    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:07.923635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923781    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923951    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:07.924122    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:07.924327    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:07.924337    8330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 10:20:08.032128    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.032150    8330 main.go:141] libmachine: Detecting the provisioner...
	I0929 10:20:08.032158    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.035150    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035650    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.035676    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035849    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.036023    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036162    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036310    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.036503    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.036699    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.036709    8330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 10:20:08.146139    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 10:20:08.146218    8330 main.go:141] libmachine: found compatible host: buildroot
	I0929 10:20:08.146225    8330 main.go:141] libmachine: Provisioning with buildroot...
	I0929 10:20:08.146232    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146517    8330 buildroot.go:166] provisioning hostname "addons-911532"
	I0929 10:20:08.146546    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146724    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.149534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.149903    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.149931    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.150079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.150261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150452    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.150709    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.150906    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.150918    8330 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-911532 && echo "addons-911532" | sudo tee /etc/hostname
	I0929 10:20:08.278974    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-911532
	
	I0929 10:20:08.279001    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.282211    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282657    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.282689    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282950    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.283137    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283318    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283463    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.283602    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.283817    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.283855    8330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-911532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-911532/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-911532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:20:08.400849    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.400874    8330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 10:20:08.400909    8330 buildroot.go:174] setting up certificates
	I0929 10:20:08.400922    8330 provision.go:84] configureAuth start
	I0929 10:20:08.400933    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.401221    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.404488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.404861    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.404881    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.405105    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.407451    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.407783    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.407808    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.408007    8330 provision.go:143] copyHostCerts
	I0929 10:20:08.408072    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 10:20:08.408347    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 10:20:08.408478    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 10:20:08.408562    8330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.addons-911532 san=[127.0.0.1 192.168.39.179 addons-911532 localhost minikube]
	I0929 10:20:08.457469    8330 provision.go:177] copyRemoteCerts
	I0929 10:20:08.457527    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:20:08.457548    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.460625    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.460962    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.460991    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.461153    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.461390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.461509    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.461643    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.546790    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:20:08.577312    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:20:08.607181    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:20:08.636055    8330 provision.go:87] duration metric: took 235.1207ms to configureAuth
	I0929 10:20:08.636085    8330 buildroot.go:189] setting minikube options for container-runtime
	I0929 10:20:08.636280    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:08.636388    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.639147    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639482    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.639525    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639765    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.639937    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640129    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640246    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.640408    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.640614    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.640629    8330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:20:08.884944    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:20:08.884967    8330 main.go:141] libmachine: Checking connection to Docker...
	I0929 10:20:08.884977    8330 main.go:141] libmachine: (addons-911532) Calling .GetURL
	I0929 10:20:08.886395    8330 main.go:141] libmachine: (addons-911532) DBG | using libvirt version 8000000
	I0929 10:20:08.888906    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889281    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.889309    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889489    8330 main.go:141] libmachine: Docker is up and running!
	I0929 10:20:08.889503    8330 main.go:141] libmachine: Reticulating splines...
	I0929 10:20:08.889509    8330 client.go:171] duration metric: took 19.140044962s to LocalClient.Create
	I0929 10:20:08.889527    8330 start.go:167] duration metric: took 19.140101533s to libmachine.API.Create "addons-911532"
	I0929 10:20:08.889535    8330 start.go:293] postStartSetup for "addons-911532" (driver="kvm2")
	I0929 10:20:08.889546    8330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:20:08.889561    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:08.889787    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:20:08.889810    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.893400    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893828    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.893850    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893987    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.894222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.894407    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.894549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.979409    8330 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:20:08.984274    8330 info.go:137] Remote host: Buildroot 2025.02
	I0929 10:20:08.984296    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 10:20:08.984377    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 10:20:08.984400    8330 start.go:296] duration metric: took 94.85978ms for postStartSetup
	I0929 10:20:08.984429    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:08.985063    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.987970    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988332    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.988371    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988631    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:20:08.988817    8330 start.go:128] duration metric: took 19.255225953s to createHost
	I0929 10:20:08.988846    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.991306    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.991862    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.991889    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.992056    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.992222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992394    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992520    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.992681    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.992946    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.992962    8330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 10:20:09.100129    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759141209.059279000
	
	I0929 10:20:09.100152    8330 fix.go:216] guest clock: 1759141209.059279000
	I0929 10:20:09.100159    8330 fix.go:229] Guest: 2025-09-29 10:20:09.059279 +0000 UTC Remote: 2025-09-29 10:20:08.988831556 +0000 UTC m=+19.364626106 (delta=70.447444ms)
	I0929 10:20:09.100191    8330 fix.go:200] guest clock delta is within tolerance: 70.447444ms
	I0929 10:20:09.100196    8330 start.go:83] releasing machines lock for "addons-911532", held for 19.366681656s
	I0929 10:20:09.100216    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.100557    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:09.103690    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104033    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.104062    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104246    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104743    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104923    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.105046    8330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:20:09.105097    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.105112    8330 ssh_runner.go:195] Run: cat /version.json
	I0929 10:20:09.105130    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.108069    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108119    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108464    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108512    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108734    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108749    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108912    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.108926    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.109101    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109113    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109256    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.109260    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.216417    8330 ssh_runner.go:195] Run: systemctl --version
	I0929 10:20:09.222846    8330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:20:09.384636    8330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 10:20:09.391852    8330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 10:20:09.391906    8330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:09.412791    8330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 10:20:09.412813    8330 start.go:495] detecting cgroup driver to use...
	I0929 10:20:09.412882    8330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:20:09.432417    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:20:09.448433    8330 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:20:09.448494    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:20:09.465964    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:20:09.481975    8330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:20:09.629225    8330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:20:09.840833    8330 docker.go:234] disabling docker service ...
	I0929 10:20:09.840898    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:20:09.858103    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:20:09.872733    8330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:20:10.028160    8330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:20:10.170725    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:20:10.186498    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:20:10.208790    8330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:20:10.208840    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.221373    8330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:20:10.221427    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.233339    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.245762    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.257848    8330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:20:10.270858    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.283122    8330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.304068    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.316039    8330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:20:10.326321    8330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:20:10.326388    8330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:20:10.348550    8330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:20:10.361988    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:10.507746    8330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:20:10.612811    8330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:20:10.612899    8330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:20:10.618569    8330 start.go:563] Will wait 60s for crictl version
	I0929 10:20:10.618625    8330 ssh_runner.go:195] Run: which crictl
	I0929 10:20:10.622944    8330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:20:10.665514    8330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 10:20:10.665614    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.694916    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.724814    8330 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 10:20:10.726157    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:10.729133    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729545    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:10.729575    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729788    8330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 10:20:10.734601    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:10.750745    8330 kubeadm.go:875] updating cluster {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:20:10.750830    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:10.750873    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:10.786965    8330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 10:20:10.787034    8330 ssh_runner.go:195] Run: which lz4
	I0929 10:20:10.791694    8330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 10:20:10.796598    8330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 10:20:10.796640    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 10:20:12.287040    8330 crio.go:462] duration metric: took 1.495381435s to copy over tarball
	I0929 10:20:12.287115    8330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 10:20:13.904851    8330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.617709548s)
	I0929 10:20:13.904878    8330 crio.go:469] duration metric: took 1.617810623s to extract the tarball
	I0929 10:20:13.904887    8330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 10:20:13.946333    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:13.991640    8330 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:13.991663    8330 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:20:13.991671    8330 kubeadm.go:926] updating node { 192.168.39.179 8443 v1.34.0 crio true true} ...
	I0929 10:20:13.991761    8330 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-911532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:20:13.991839    8330 ssh_runner.go:195] Run: crio config
	I0929 10:20:14.038150    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:14.038169    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:14.038180    8330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:20:14.038198    8330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-911532 NodeName:addons-911532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:20:14.038300    8330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-911532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
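For anyone re-checking this generated configuration by hand, kubeadm itself can lint it and show what init would do with it, without touching the node; a minimal sketch, assuming the config has been written to /var/tmp/minikube/kubeadm.yaml as the later steps in this log do:

    # Validate field names/values in the generated config (kubeadm v1.25+),
    # then preview the full init flow without making any changes.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run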
	
	I0929 10:20:14.038381    8330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:20:14.053651    8330 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:20:14.053724    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:20:14.068031    8330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 10:20:14.092020    8330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:20:14.116202    8330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0929 10:20:14.140056    8330 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0929 10:20:14.144733    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
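The /etc/hosts one-liner above is dense; restated with comments (same effect, just spread out for readability):

    # 1. keep every existing /etc/hosts line except a stale "<tab>control-plane.minikube.internal" entry,
    # 2. append the current mapping for this node's IP,
    # 3. then replace /etc/hosts in a single copy via a temp file.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.39.179	control-plane.minikube.internal"
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts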
	I0929 10:20:14.159800    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:14.314527    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:14.337683    8330 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532 for IP: 192.168.39.179
	I0929 10:20:14.337707    8330 certs.go:194] generating shared ca certs ...
	I0929 10:20:14.337743    8330 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.337913    8330 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 10:20:14.828624    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt ...
	I0929 10:20:14.828656    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt: {Name:mk605d19c615ec63bb49553d32d16a9968996447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828869    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key ...
	I0929 10:20:14.828887    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key: {Name:mk116fbaf9146e252d64c98b19fb4d5d877a65f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828995    8330 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 10:20:15.061750    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt ...
	I0929 10:20:15.061779    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt: {Name:mk3eeeaec93a3e580abc1a0f8721c39cfd08ef60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.061960    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key ...
	I0929 10:20:15.061975    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key: {Name:mkc397709470903133ba0b5a62b9ca66bd0144de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.062076    8330 certs.go:256] generating profile certs ...
	I0929 10:20:15.062154    8330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key
	I0929 10:20:15.062173    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt with IP's: []
	I0929 10:20:15.253281    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt ...
	I0929 10:20:15.253313    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: {Name:mkb6d93d9208f1e65858ef821a0bf2997c10f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253506    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key ...
	I0929 10:20:15.253523    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key: {Name:mk3162bfdf768dab29342cf9830ff9fd4702cb96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253628    8330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f
	I0929 10:20:15.253656    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.179]
	I0929 10:20:15.479023    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f ...
	I0929 10:20:15.479053    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f: {Name:mkae8e94bfacd54df10c2599ebed7801d300337d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479223    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f ...
	I0929 10:20:15.479241    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f: {Name:mk28de5248c1f787c9e307292da7671529b3c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479345    8330 certs.go:381] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt
	I0929 10:20:15.479457    8330 certs.go:385] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key
	I0929 10:20:15.479530    8330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key
	I0929 10:20:15.479554    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt with IP's: []
	I0929 10:20:15.890186    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt ...
	I0929 10:20:15.890217    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt: {Name:mk8d6457a0876ed0180e350f3cff3f286feaeb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890408    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key ...
	I0929 10:20:15.890424    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key: {Name:mk5fa1c5bb7ab27f1723ebd353f821745dcf151a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890613    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:20:15.890663    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:20:15.890698    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:20:15.890741    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem (1679 bytes)
	I0929 10:20:15.891316    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:20:15.938903    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:20:15.978982    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:20:16.009727    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:20:16.039344    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:20:16.070479    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:20:16.101539    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:20:16.131091    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:20:16.161171    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:20:16.190550    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:20:16.210923    8330 ssh_runner.go:195] Run: openssl version
	I0929 10:20:16.217450    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:20:16.231199    8330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236531    8330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236589    8330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.244248    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
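The b5213941.0 name used above is OpenSSL's subject-hash lookup name for the minikube CA; the same symlink can be derived by hand from the commands already shown in this log:

    # Compute the <hash>.0 filename OpenSSL expects under /etc/ssl/certs,
    # then point it at the CA certificate installed earlier.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"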
	I0929 10:20:16.258217    8330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:20:16.263250    8330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:20:16.263302    8330 kubeadm.go:392] StartCluster: {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:16.263401    8330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:20:16.263469    8330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:20:16.311031    8330 cri.go:89] found id: ""
	I0929 10:20:16.311136    8330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:20:16.324180    8330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:20:16.335996    8330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:20:16.348491    8330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:20:16.348510    8330 kubeadm.go:157] found existing configuration files:
	
	I0929 10:20:16.348558    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:20:16.359693    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:20:16.359754    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:20:16.371848    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:20:16.382965    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:20:16.383055    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:20:16.395004    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.405764    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:20:16.405833    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.417554    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:20:16.428340    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:20:16.428405    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:20:16.439786    8330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 10:20:16.601410    8330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:20:29.233520    8330 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:20:29.233611    8330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:20:29.233698    8330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:20:29.233818    8330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:20:29.233926    8330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:20:29.233987    8330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:20:29.236675    8330 out.go:252]   - Generating certificates and keys ...
	I0929 10:20:29.236749    8330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:20:29.236804    8330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:20:29.236891    8330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:20:29.236989    8330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:20:29.237083    8330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:20:29.237156    8330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:20:29.237245    8330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:20:29.237406    8330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237472    8330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:20:29.237610    8330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237672    8330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:20:29.237726    8330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:20:29.237792    8330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:20:29.237868    8330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:20:29.237928    8330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:20:29.237983    8330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:20:29.238037    8330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:20:29.238094    8330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:20:29.238141    8330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:20:29.238212    8330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:20:29.238272    8330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:20:29.239488    8330 out.go:252]   - Booting up control plane ...
	I0929 10:20:29.239556    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:20:29.239621    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:20:29.239677    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:20:29.239796    8330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:20:29.239908    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:20:29.240017    8330 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:20:29.240091    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:20:29.240132    8330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:20:29.240245    8330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:20:29.240338    8330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:20:29.240414    8330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500993452s
	I0929 10:20:29.240491    8330 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:20:29.240576    8330 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.179:8443/livez
	I0929 10:20:29.240647    8330 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:20:29.240713    8330 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:20:29.240773    8330 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.605979769s
	I0929 10:20:29.240827    8330 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.265600399s
	I0929 10:20:29.240895    8330 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001411979s
	I0929 10:20:29.241002    8330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:20:29.241131    8330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:20:29.241217    8330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:20:29.241415    8330 kubeadm.go:310] [mark-control-plane] Marking the node addons-911532 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:20:29.241473    8330 kubeadm.go:310] [bootstrap-token] Using token: xpmnvs.em3s359nhdig9yyg
	I0929 10:20:29.243962    8330 out.go:252]   - Configuring RBAC rules ...
	I0929 10:20:29.244057    8330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:20:29.244129    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:20:29.244271    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:20:29.244454    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:20:29.244608    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:20:29.244721    8330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:20:29.244831    8330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:20:29.244870    8330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:20:29.244921    8330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:20:29.244927    8330 kubeadm.go:310] 
	I0929 10:20:29.244982    8330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:20:29.244987    8330 kubeadm.go:310] 
	I0929 10:20:29.245051    8330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:20:29.245057    8330 kubeadm.go:310] 
	I0929 10:20:29.245078    8330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:20:29.245167    8330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:20:29.245249    8330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:20:29.245259    8330 kubeadm.go:310] 
	I0929 10:20:29.245332    8330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:20:29.245343    8330 kubeadm.go:310] 
	I0929 10:20:29.245425    8330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:20:29.245437    8330 kubeadm.go:310] 
	I0929 10:20:29.245517    8330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:20:29.245623    8330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:20:29.245684    8330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:20:29.245691    8330 kubeadm.go:310] 
	I0929 10:20:29.245784    8330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:20:29.245882    8330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:20:29.245889    8330 kubeadm.go:310] 
	I0929 10:20:29.245989    8330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246119    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 \
	I0929 10:20:29.246143    8330 kubeadm.go:310] 	--control-plane 
	I0929 10:20:29.246149    8330 kubeadm.go:310] 
	I0929 10:20:29.246228    8330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:20:29.246239    8330 kubeadm.go:310] 
	I0929 10:20:29.246310    8330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246451    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 
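For reference, the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's public key. The standard recipe from the kubeadm documentation recomputes it on the node (assuming the default RSA CA key, and this cluster's certificatesDir of /var/lib/minikube/certs):

    # Recompute the sha256:<hash> value used with --discovery-token-ca-cert-hash.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'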
	I0929 10:20:29.246468    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:29.246477    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:29.248668    8330 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:20:29.249832    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:20:29.264165    8330 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
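The 496-byte 1-k8s.conflist copied above is minikube's bridge CNI configuration. Its exact contents are not reproduced in this log; a bridge-plus-portmap conflist of roughly the following shape (a sketch for the 10.244.0.0/16 pod CIDR used here, not the literal file minikube writes) is what the CNI bridge plugin consumes:

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }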
	I0929 10:20:29.287307    8330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:20:29.287371    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.287441    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-911532 minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=addons-911532 minikube.k8s.io/primary=true
	I0929 10:20:29.333982    8330 ops.go:34] apiserver oom_adj: -16
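The -16 read back above is on the legacy oom_adj scale (-17 to +15), meaning the API server process is strongly protected from the kernel OOM killer; the modern equivalent can be checked alongside it (a quick sketch):

    # Legacy value, as minikube reads it above, and its oom_score_adj counterpart.
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj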
	I0929 10:20:29.443148    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.943547    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.443943    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.944035    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.443398    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.943338    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.443329    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.944216    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.443626    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.943212    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.443454    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.577904    8330 kubeadm.go:1105] duration metric: took 5.290578825s to wait for elevateKubeSystemPrivileges
	I0929 10:20:34.577946    8330 kubeadm.go:394] duration metric: took 18.314646355s to StartCluster
	I0929 10:20:34.577972    8330 settings.go:142] acquiring lock: {Name:mkbd44ffc9a24198fd299896a4cba1c423a77e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578089    8330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:20:34.578570    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/kubeconfig: {Name:mka4c30ad2429731194076d58cd88072dc744e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578797    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:20:34.578808    8330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:20:34.578883    8330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:20:34.578998    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.579013    8330 addons.go:69] Setting metrics-server=true in profile "addons-911532"
	I0929 10:20:34.579019    8330 addons.go:69] Setting inspektor-gadget=true in profile "addons-911532"
	I0929 10:20:34.579032    8330 addons.go:238] Setting addon metrics-server=true in "addons-911532"
	I0929 10:20:34.579001    8330 addons.go:69] Setting yakd=true in profile "addons-911532"
	I0929 10:20:34.579051    8330 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579058    8330 addons.go:69] Setting registry=true in profile "addons-911532"
	I0929 10:20:34.579072    8330 addons.go:69] Setting registry-creds=true in profile "addons-911532"
	I0929 10:20:34.579076    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579083    8330 addons.go:69] Setting ingress=true in profile "addons-911532"
	I0929 10:20:34.579081    8330 addons.go:69] Setting cloud-spanner=true in profile "addons-911532"
	I0929 10:20:34.579094    8330 addons.go:238] Setting addon ingress=true in "addons-911532"
	I0929 10:20:34.579096    8330 addons.go:238] Setting addon registry=true in "addons-911532"
	I0929 10:20:34.579103    8330 addons.go:238] Setting addon cloud-spanner=true in "addons-911532"
	I0929 10:20:34.579073    8330 addons.go:69] Setting default-storageclass=true in profile "addons-911532"
	I0929 10:20:34.579122    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579121    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-911532"
	I0929 10:20:34.579135    8330 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-911532"
	I0929 10:20:34.579139    8330 addons.go:69] Setting ingress-dns=true in profile "addons-911532"
	I0929 10:20:34.579153    8330 addons.go:238] Setting addon ingress-dns=true in "addons-911532"
	I0929 10:20:34.579163    8330 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:34.579173    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579182    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579066    8330 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-911532"
	I0929 10:20:34.579422    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579042    8330 addons.go:69] Setting storage-provisioner=true in profile "addons-911532"
	I0929 10:20:34.579481    8330 addons.go:238] Setting addon storage-provisioner=true in "addons-911532"
	I0929 10:20:34.579516    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579556    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579584    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579596    8330 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-911532"
	I0929 10:20:34.579608    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-911532"
	I0929 10:20:34.579617    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579621    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579642    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579645    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579680    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579042    8330 addons.go:238] Setting addon inspektor-gadget=true in "addons-911532"
	I0929 10:20:34.579704    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579864    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579866    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579902    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579927    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579956    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579976    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580024    8330 addons.go:69] Setting volcano=true in profile "addons-911532"
	I0929 10:20:34.579130    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580036    8330 addons.go:238] Setting addon volcano=true in "addons-911532"
	I0929 10:20:34.580046    8330 addons.go:69] Setting volumesnapshots=true in profile "addons-911532"
	I0929 10:20:34.580056    8330 addons.go:238] Setting addon volumesnapshots=true in "addons-911532"
	I0929 10:20:34.579063    8330 addons.go:238] Setting addon yakd=true in "addons-911532"
	I0929 10:20:34.579586    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580102    8330 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579127    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580205    8330 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-911532"
	I0929 10:20:34.579104    8330 addons.go:238] Setting addon registry-creds=true in "addons-911532"
	I0929 10:20:34.580465    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580663    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580700    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579074    8330 addons.go:69] Setting gcp-auth=true in profile "addons-911532"
	I0929 10:20:34.580761    8330 mustload.go:65] Loading cluster: addons-911532
	I0929 10:20:34.580485    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580518    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.581600    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.581630    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580542    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580556    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.582054    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582079    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582213    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582242    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582457    8330 out.go:179] * Verifying Kubernetes components...
	I0929 10:20:34.580566    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580580    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580599    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582793    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.584547    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.584595    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.586549    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:34.587657    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.587871    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.587947    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.588033    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.588105    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.589680    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.589749    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.611209    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0929 10:20:34.619982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0929 10:20:34.620045    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0929 10:20:34.620051    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0929 10:20:34.619982    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620679    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620992    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.621009    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.621801    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.621914    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0929 10:20:34.621956    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0929 10:20:34.622631    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.622650    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.623029    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623106    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0929 10:20:34.623707    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623823    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623840    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623952    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.623963    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624510    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.624527    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624583    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.624625    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.625263    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.625789    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.625829    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.626174    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.626661    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.626678    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627096    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627432    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627595    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627607    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627652    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.627682    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.627733    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627744    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.628166    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628190    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628220    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.628314    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628759    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628788    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.629020    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.629055    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.631879    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0929 10:20:34.632376    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.632705    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0929 10:20:34.633030    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.633048    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.633193    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.633230    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.633267    8330 addons.go:238] Setting addon default-storageclass=true in "addons-911532"
	I0929 10:20:34.633652    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.633800    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.634170    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.634207    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.635813    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.635852    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.636152    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.636325    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0929 10:20:34.636872    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.637313    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.637328    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642530    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.642548    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642626    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.642679    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0929 10:20:34.643872    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.644142    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.644246    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.644288    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.645594    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0929 10:20:34.648922    8330 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-911532"
	I0929 10:20:34.649021    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.649433    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.649468    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.648943    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.652866    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.653073    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.653088    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.653480    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0929 10:20:34.653596    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.654397    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.654434    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.654714    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0929 10:20:34.654720    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.654766    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.654784    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655230    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.655412    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.655448    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655888    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.655923    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.656194    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.656228    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.656428    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.657115    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.657140    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.660741    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.661324    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.661373    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.664929    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.665442    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0929 10:20:34.665663    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I0929 10:20:34.666958    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.666976    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.667484    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.667511    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.667663    8330 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:20:34.668039    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.668186    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.668825    8330 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:20:34.668844    8330 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:20:34.668864    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.670363    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0929 10:20:34.670492    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0929 10:20:34.670589    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670638    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670685    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0929 10:20:34.670850    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.671069    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.673465    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0929 10:20:34.673527    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.674063    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.674096    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.674977    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0929 10:20:34.675676    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.676230    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.676248    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.676307    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.676719    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.677275    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.677317    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.677523    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.678840    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678928    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678990    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.679041    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.679058    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.679469    8330 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:20:34.680842    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:34.680869    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:20:34.680887    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.682698    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.682719    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0929 10:20:34.682798    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682814    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682799    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682873    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682971    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.683566    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683632    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683639    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683654    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683726    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.683774    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683785    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683941    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.684015    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.684089    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.684161    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684441    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.684455    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.684741    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.684802    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684849    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684894    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0929 10:20:34.685225    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685265    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685603    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685635    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685757    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.686288    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.686328    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.687002    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.687029    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.690223    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.693652    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.693704    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.698952    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I0929 10:20:34.698970    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.699009    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.699052    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.699072    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.698956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.699670    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.699705    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.700063    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.700153    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700166    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700208    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700218    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700345    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.700526    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.701231    8330 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:20:34.701911    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.701977    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.702057    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:20:34.702426    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702172    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702205    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.702855    8330 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:34.703378    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:20:34.703399    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.704803    8330 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:20:34.704895    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:20:34.705477    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0929 10:20:34.705978    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:20:34.705994    8330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:20:34.706011    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.708737    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:20:34.709962    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:20:34.711332    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:20:34.711651    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.711697    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0929 10:20:34.711872    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I0929 10:20:34.711919    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712201    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712421    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712506    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.712521    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.712998    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.713202    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.713218    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.713266    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.713854    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0929 10:20:34.713974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.714080    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.714091    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.714089    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:20:34.714230    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.715079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.715142    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715220    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.715297    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715368    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.715956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716009    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.716125    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.716175    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0929 10:20:34.716205    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.716294    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.716343    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 10:20:34.716378    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.716486    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.716488    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716500    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:20:34.716534    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716848    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716857    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.717024    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.717298    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.717928    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.718122    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0929 10:20:34.718584    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.718977    8330 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:20:34.719471    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719488    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719792    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719808    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719952    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:20:34.720195    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.719597    8330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:20:34.720392    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.720598    8330 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:34.720616    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:20:34.720632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.720636    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.720067    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.720145    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.720684    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721147    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.721261    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:20:34.721272    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:20:34.721286    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721295    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.721304    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721329    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:34.721337    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.721343    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:20:34.721370    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721378    8330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:34.721386    8330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:20:34.721397    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.722081    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722147    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722188    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722501    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722717    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.722815    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I0929 10:20:34.723931    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.724477    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.724627    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0929 10:20:34.724682    8330 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:20:34.725137    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0929 10:20:34.725214    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725408    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.725474    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.725712    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725963    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.725985    8330 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:20:34.726200    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726227    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726409    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726429    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726650    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.726822    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727082    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727129    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.727533    8330 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:20:34.727533    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:20:34.727652    8330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:20:34.727676    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.728686    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.728766    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.729230    8330 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:20:34.729245    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:20:34.729261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.730397    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.730781    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.731393    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.731820    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.732216    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:20:34.732339    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.732658    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.732406    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732428    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.732749    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732857    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:20:34.733003    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:34.733015    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.733085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733094    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:34.733106    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.733113    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.733174    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.733327    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.733400    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733408    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:20:34.733499    8330 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:20:34.733798    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.733801    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.733912    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.734054    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.734644    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734341    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734688    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734709    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734754    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:20:34.734762    8330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:20:34.734774    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.735299    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.735491    8330 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:20:34.735536    8330 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:20:34.735635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.735893    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.736417    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.736504    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736524    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736551    8330 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:34.736683    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736733    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:20:34.736746    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736864    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737094    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.737173    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737498    8330 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:34.737512    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:20:34.737529    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.738046    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.738231    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.738250    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.739179    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:34.739195    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:20:34.739209    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.739655    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.740103    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740604    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.740967    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740970    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.741030    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.741379    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.741614    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.741632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.741788    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742109    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.742129    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.742150    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742161    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742421    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.742535    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.742802    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742930    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.743127    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743456    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743674    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.743699    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743973    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744132    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744133    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.744187    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744304    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.744456    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744462    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.744601    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744706    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744809    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.745109    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.745491    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.745518    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.745796    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.745998    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.746170    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.746303    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.746890    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747330    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.747407    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.747612    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0929 10:20:34.747719    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.747882    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.747955    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.748060    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.748397    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.748421    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.748773    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.749012    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.750457    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.752008    8330 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:20:34.753202    8330 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:20:34.754342    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:34.754377    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:20:34.754395    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.757852    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758255    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.758330    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758551    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.758744    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.758881    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.759050    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	W0929 10:20:35.042687    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.042728    8330 retry.go:31] will retry after 227.252154ms: ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	W0929 10:20:35.046188    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.046216    8330 retry.go:31] will retry after 146.732464ms: ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.540872    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:35.579899    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:35.660053    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:20:35.660086    8330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:20:35.675711    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:35.683986    8330 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:35.684010    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:20:35.740542    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:20:35.740565    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:20:35.747876    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:35.761273    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:20:35.761301    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:20:35.864047    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:35.966173    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.387341194s)
	I0929 10:20:35.966224    8330 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.379651941s)
	I0929 10:20:35.966281    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:35.966363    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:20:35.991879    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:36.019637    8330 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:20:36.019659    8330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:20:36.122486    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:36.211453    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:20:36.211479    8330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:20:36.220363    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:36.238690    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:36.284452    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:36.301479    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:20:36.301501    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:20:36.312324    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:20:36.312347    8330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:20:36.401460    8330 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.401485    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:20:36.408098    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:20:36.408119    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:20:36.602526    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:20:36.602552    8330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:20:36.629597    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:36.629620    8330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:20:36.659489    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:20:36.659518    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:20:36.760787    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:20:36.760817    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:20:36.780734    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.980282    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:36.980312    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:20:37.019180    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:20:37.019209    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:20:37.067476    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:37.210287    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:20:37.210314    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:20:37.370170    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:20:37.370205    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:20:37.411611    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:37.615958    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:20:37.615977    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:20:37.626251    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:20:37.626289    8330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:20:37.851163    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.310253621s)
	I0929 10:20:37.851224    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851237    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851589    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851612    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:37.851627    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851636    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851934    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:37.851969    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851975    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:38.121335    8330 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.121366    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:20:38.153983    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:20:38.154019    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:20:38.462249    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.490038    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:20:38.490067    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:20:38.882899    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:20:38.882924    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:20:39.175979    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:39.176000    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:20:39.522531    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:40.536771    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.956838267s)
	I0929 10:20:40.536814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536829    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.536835    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.861093026s)
	I0929 10:20:40.536874    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536892    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537112    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537122    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537133    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537139    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537144    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537149    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537151    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537158    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.539079    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539093    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539101    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.539082    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539102    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.645111    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.645134    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.645420    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.645437    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794330    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.046421969s)
	I0929 10:20:40.794394    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794407    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794407    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.93033074s)
	I0929 10:20:40.794439    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794453    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794500    8330 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.828203764s)
	I0929 10:20:40.794545    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.828162665s)
	I0929 10:20:40.794560    8330 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 10:20:40.794605    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.80268956s)
	I0929 10:20:40.794635    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794647    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794795    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794805    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794820    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794832    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794834    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794845    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794854    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794862    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794873    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794895    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794902    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794910    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794917    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794917    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.672242746s)
	I0929 10:20:40.794943    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794952    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795217    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795243    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795265    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795271    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795395    8330 node_ready.go:35] waiting up to 6m0s for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.795495    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795525    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795533    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795542    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.795549    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795622    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795630    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795919    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795972    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.797514    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.797521    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.797532    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.815143    8330 node_ready.go:49] node "addons-911532" is "Ready"
	I0929 10:20:40.815165    8330 node_ready.go:38] duration metric: took 19.750953ms for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.815177    8330 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:20:40.815221    8330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:20:41.364748    8330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-911532" context rescaled to 1 replicas
	I0929 10:20:42.085122    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.864720869s)
	I0929 10:20:42.085215    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085224    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085491    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085509    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085519    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085526    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085859    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085876    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085859    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:42.176567    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.937842433s)
	W0929 10:20:42.176609    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:42.176627    8330 retry.go:31] will retry after 344.433489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
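Each retry of ig-crd.yaml fails the same way: kubectl rejects the file because the mandatory top-level apiVersion and kind fields are missing, so re-applying cannot succeed until the manifest itself is fixed. For illustration only, a minimal CRD header that passes this validation (group, names and schema below are hypothetical placeholders, not the real inspektor-gadget definition):

	cat <<'EOF' | kubectl apply --dry-run=client --validate=true -f -
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io
	spec:
	  group: gadget.example.io
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  scope: Namespaced
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object
	        x-kubernetes-preserve-unknown-fields: true
	EOF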
	I0929 10:20:42.229614    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:20:42.229647    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.233209    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.233765    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.233790    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.234014    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.234217    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.234390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.234549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:42.363888    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.363918    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.364176    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.364191    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.402322    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:20:42.497253    8330 addons.go:238] Setting addon gcp-auth=true in "addons-911532"
	I0929 10:20:42.497305    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:42.497617    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.497656    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.511982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0929 10:20:42.512604    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.513162    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.513187    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.513517    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.514096    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.514143    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.521475    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:42.527839    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0929 10:20:42.528255    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.528790    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.528815    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.529201    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.529440    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:42.531322    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:42.531562    8330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:20:42.531583    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.534916    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535403    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.535429    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535641    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.535801    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.535982    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.536112    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:43.911194    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.130428404s)
	I0929 10:20:43.911250    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911264    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911305    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.843789347s)
	I0929 10:20:43.911370    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911417    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.626934708s)
	I0929 10:20:43.911442    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911459    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911385    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.499749199s)
	I0929 10:20:43.911505    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911516    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911518    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911520    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911526    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911535    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911543    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911569    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911624    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911642    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911716    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911726    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911755    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911766    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911777    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911784    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911789    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911796    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911805    8330 addons.go:479] Verifying addon registry=true in "addons-911532"
	I0929 10:20:43.911889    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911917    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.913725    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.913735    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.913745    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.914016    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914032    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914034    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914043    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914046    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914052    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914056    8330 addons.go:479] Verifying addon metrics-server=true in "addons-911532"
	I0929 10:20:43.914058    8330 addons.go:479] Verifying addon ingress=true in "addons-911532"
	I0929 10:20:43.914108    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914456    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914126    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.916045    8330 out.go:179] * Verifying registry addon...
	I0929 10:20:43.916966    8330 out.go:179] * Verifying ingress addon...
	I0929 10:20:43.916970    8330 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-911532 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:20:43.918685    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:20:43.919216    8330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:20:43.932029    8330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:20:43.932051    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:43.932389    8330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:20:43.932401    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
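kapi.go polls the labelled pods until they report Ready; the repeated "current state: Pending" lines that follow are those poll iterations. A roughly equivalent wait expressed as a single kubectl command (hypothetical, not what the test harness actually runs):

	kubectl --context addons-911532 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m0s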
	I0929 10:20:44.445321    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.455769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974560    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974637    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.197486    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.73519948s)
	W0929 10:20:45.197531    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:20:45.197552    8330 retry.go:31] will retry after 188.758064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
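This failure is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml references kind VolumeSnapshotClass, whose CRD is created in the very same apply and is not yet registered with the API server, hence "ensure CRDs are installed first"; the forced re-apply later in the log goes through once the CRDs have settled. A sketch of serializing the two steps instead of relying on the retry loop (same file paths as in the log; the wait uses the standard CRD Established condition):

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml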
	I0929 10:20:45.197780    8330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.382549144s)
	I0929 10:20:45.197804    8330 api_server.go:72] duration metric: took 10.618970714s to wait for apiserver process to appear ...
	I0929 10:20:45.197812    8330 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:20:45.197833    8330 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I0929 10:20:45.197777    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.675200772s)
	I0929 10:20:45.197918    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.197936    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198196    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198209    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:45.198225    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198240    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.198251    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198499    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198512    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198521    8330 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:45.200264    8330 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:20:45.202570    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:20:45.239947    8330 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I0929 10:20:45.262006    8330 api_server.go:141] control plane version: v1.34.0
	I0929 10:20:45.262038    8330 api_server.go:131] duration metric: took 64.218943ms to wait for apiserver health ...
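The healthz probe above is a plain HTTPS GET against the API server; with the default system:public-info-viewer binding the /healthz path is readable even without client credentials, so the same check can be reproduced by hand (hypothetical command, not part of the test):

	curl -sk https://192.168.39.179:8443/healthz
	# expected output: ok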
	I0929 10:20:45.262051    8330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:20:45.279433    8330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:20:45.279463    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.334344    8330 system_pods.go:59] 20 kube-system pods found
	I0929 10:20:45.334413    8330 system_pods.go:61] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.334425    8330 system_pods.go:61] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334435    8330 system_pods.go:61] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334444    8330 system_pods.go:61] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.334456    8330 system_pods.go:61] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.334471    8330 system_pods.go:61] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.334480    8330 system_pods.go:61] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.334486    8330 system_pods.go:61] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.334491    8330 system_pods.go:61] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.334500    8330 system_pods.go:61] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.334505    8330 system_pods.go:61] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.334513    8330 system_pods.go:61] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.334517    8330 system_pods.go:61] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.334528    8330 system_pods.go:61] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.334537    8330 system_pods.go:61] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.334549    8330 system_pods.go:61] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.334559    8330 system_pods.go:61] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.334565    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.334571    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending
	I0929 10:20:45.334578    8330 system_pods.go:61] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.334589    8330 system_pods.go:74] duration metric: took 72.532335ms to wait for pod list to return data ...
	I0929 10:20:45.334601    8330 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:20:45.386874    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:45.438919    8330 default_sa.go:45] found service account: "default"
	I0929 10:20:45.438959    8330 default_sa.go:55] duration metric: took 104.351561ms for default service account to be created ...
	I0929 10:20:45.438970    8330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:20:45.479205    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.479375    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.504498    8330 system_pods.go:86] 20 kube-system pods found
	I0929 10:20:45.504542    8330 system_pods.go:89] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.504556    8330 system_pods.go:89] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504572    8330 system_pods.go:89] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504584    8330 system_pods.go:89] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.504598    8330 system_pods.go:89] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.504609    8330 system_pods.go:89] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.504620    8330 system_pods.go:89] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.504627    8330 system_pods.go:89] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.504638    8330 system_pods.go:89] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.504647    8330 system_pods.go:89] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.504655    8330 system_pods.go:89] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.504662    8330 system_pods.go:89] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.504674    8330 system_pods.go:89] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.504685    8330 system_pods.go:89] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.504698    8330 system_pods.go:89] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.504712    8330 system_pods.go:89] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.504724    8330 system_pods.go:89] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.504734    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.504746    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:20:45.504759    8330 system_pods.go:89] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.504773    8330 system_pods.go:126] duration metric: took 65.795363ms to wait for k8s-apps to be running ...
	I0929 10:20:45.504787    8330 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:20:45.504845    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
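The kubelet check relies purely on the exit code: systemctl is-active --quiet prints nothing and returns 0 only when the unit is active. A minimal sketch of the same check run interactively (hypothetical, not part of the test):

	sudo systemctl is-active --quiet kubelet && echo running || echo not-running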
	I0929 10:20:45.714542    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.928522    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.929140    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.136638    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.615124231s)
	W0929 10:20:46.136687    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:46.136709    8330 retry.go:31] will retry after 424.774106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:46.136723    8330 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.605137457s)
	I0929 10:20:46.138626    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:46.139865    8330 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:20:46.140982    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:20:46.141003    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:20:46.207677    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.212782    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:20:46.212807    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:20:46.366549    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.366571    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:20:46.428820    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.428931    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.438908    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.561803    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:46.711871    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.927480    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.927570    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.210898    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.425645    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.426862    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.619932    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.233004041s)
	I0929 10:20:47.619964    8330 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.115094401s)
	I0929 10:20:47.619993    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620010    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620013    8330 system_svc.go:56] duration metric: took 2.115222945s WaitForService to wait for kubelet
	I0929 10:20:47.620026    8330 kubeadm.go:578] duration metric: took 13.041192565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:20:47.620054    8330 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:20:47.620300    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:47.620344    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.620383    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620401    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620637    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620655    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.627713    8330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 10:20:47.627742    8330 node_conditions.go:123] node cpu capacity is 2
	I0929 10:20:47.627760    8330 node_conditions.go:105] duration metric: took 7.699657ms to run NodePressure ...
	I0929 10:20:47.627774    8330 start.go:241] waiting for startup goroutines ...
	I0929 10:20:47.711789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.936879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.936886    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.243761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.409409    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.970463476s)
	I0929 10:20:48.409454    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409465    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.409848    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.409869    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.409871    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:48.409880    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409889    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.410156    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.410172    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.411269    8330 addons.go:479] Verifying addon gcp-auth=true in "addons-911532"
	I0929 10:20:48.412822    8330 out.go:179] * Verifying gcp-auth addon...
	I0929 10:20:48.415066    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:20:48.435583    8330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:20:48.435609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.444290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:48.444495    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.711086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.926706    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.926805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.928639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.215777    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.345459    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.783617228s)
	W0929 10:20:49.345502    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:49.345521    8330 retry.go:31] will retry after 771.396332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:49.427174    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.427499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.430561    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:49.718587    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.920192    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.923406    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.929629    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.117584    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:50.213086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.424674    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.428302    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:50.428402    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.711184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.920140    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.925731    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.928955    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.148250    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030628865s)
	W0929 10:20:51.148302    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.148324    8330 retry.go:31] will retry after 576.274213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.211066    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.423094    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.427282    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:51.429044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.713135    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.725183    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:51.924229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.924401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.930896    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.209703    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.421865    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.425402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.428630    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.716412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.924295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.930265    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.930335    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.936143    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.210924841s)
	W0929 10:20:52.936185    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:52.936205    8330 retry.go:31] will retry after 1.374220476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:53.207601    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.421623    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.424423    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.425168    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:53.716959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.924543    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.924591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.924737    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.206885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.311018    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:54.419619    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.424155    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.425928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.711437    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.921635    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.923109    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.923875    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.207886    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.357008    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.045956607s)
	W0929 10:20:55.357041    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.357056    8330 retry.go:31] will retry after 2.584738248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.419277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.423271    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.425958    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.771885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.922759    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.925311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.926888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.286209    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.421963    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.425255    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.427805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:56.711210    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.919760    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.923081    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.925860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.208042    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.421946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.425265    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.425867    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.707061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.929800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.930205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.931973    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.942181    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:58.207102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.423712    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:58.423755    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.427125    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.715894    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.918954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.921183    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.923721    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:59.059080    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116858718s)
	W0929 10:20:59.059141    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.059166    8330 retry.go:31] will retry after 1.942151479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.209232    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:59.417948    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:59.429985    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:59.430010    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130362    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130976    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.132182    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.132787    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.228828    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.419020    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.421809    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.424680    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.709229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.927517    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.928518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.928523    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.001724    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:01.208275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.419888    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.428910    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.429180    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.708863    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.920044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.923338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.926834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:01.985595    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:01.985631    8330 retry.go:31] will retry after 3.874793998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:02.207338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.419005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.423832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.425188    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:02.710318    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.919221    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.922831    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.925818    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.211916    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.421799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.423873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:03.425858    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.707940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.918761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.924771    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.925496    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.208373    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.427530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.427562    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:04.429185    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.711395    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.918946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.922890    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.925419    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.207717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.425588    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.426139    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.428064    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:05.709966    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.861215    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:05.919835    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.925204    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.925220    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.512873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.512876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.512941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:06.513032    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.712945    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.919940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.927065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.928484    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.092306    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.231046214s)
	W0929 10:21:07.092346    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.092387    8330 retry.go:31] will retry after 5.851261749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.210508    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.421136    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.424149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.424367    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.709771    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.920164    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.925061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.928279    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.220428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.419698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.423421    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.427645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:08.714820    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.919380    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.924174    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.926180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.210300    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.418857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.422339    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.423046    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.711312    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.920056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.925490    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.925515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.207095    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.425993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.426301    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.426888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:10.708041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.921163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.923488    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.925261    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.211024    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.422876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.426400    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.428603    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.709665    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.919412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.925463    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.929002    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.209928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.420018    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.424532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.425138    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.710157    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.920343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.925416    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.926144    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.944295    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:13.208230    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.420309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.424729    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.425970    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.710892    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:13.844128    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.844162    8330 retry.go:31] will retry after 11.364763944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.918763    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.922860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.923485    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.206401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.418165    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.425970    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.426096    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.933764    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.937462    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.937474    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.937812    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.208057    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.418646    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.425269    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.425769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.993595    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.997320    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.997530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.997548    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.206772    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.422583    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.424335    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.426227    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.708097    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.921247    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.923984    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.925900    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.210604    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.419727    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.428991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.429113    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.713728    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.929841    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.930573    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.933149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.208428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.420222    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.424398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.424564    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.711774    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.918936    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.922240    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.923709    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.207800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.419045    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.422805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.422969    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.705451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.918694    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.923618    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.924430    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.207194    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.424041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.432156    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.434202    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.713518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.921792    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.927184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.927815    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.207457    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.418704    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.422991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.425131    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.708372    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.924974    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.925102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.925333    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.208676    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.418579    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.422645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.424686    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.709484    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.926015    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.927557    8330 kapi.go:107] duration metric: took 39.008871236s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:21:22.929226    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.209576    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.425205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.428082    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.714593    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.920363    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.924951    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.207552    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.420112    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.707639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.922839    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.923981    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.209524    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:25.391829    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.419769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.423811    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.709920    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.919838    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.922426    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.207779    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.300301    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.090742353s)
	W0929 10:21:26.300347    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:26.300372    8330 retry.go:31] will retry after 12.261050049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
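Each retry re-applies the same files, so the failure repeats verbatim while the backoff between attempts lengthens overall (576ms, 1.37s, 2.58s, 1.94s, 3.87s, 5.85s, 11.36s, 12.26s in the attempts above). One common way a manifest fails with exactly "apiVersion not set, kind not set" is a multi-document file in which one document has content but no type header, for example after templating or truncation; the file below is a hypothetical reproduction and says nothing about what ig-crd.yaml actually contains.

    # Hypothetical two-document file: the second document has content but no
    # apiVersion/kind, which kubectl's client-side validation reports as
    # "error validating data: [apiVersion not set, kind not set]".
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: doc-ok            # this document validates
    ---
    # apiVersion and kind deliberately missing in this document
    metadata:
      name: doc-broken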
	I0929 10:21:26.418609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.425516    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.709030    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.920490    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.923303    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.210832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.419571    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.423843    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.717343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.920068    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.929499    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.213205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.420745    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.425514    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.715069    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.919315    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.924075    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.209126    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.418285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.425171    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.722341    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.919736    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.924941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.207130    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.421800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.422894    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.712262    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.919477    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.922148    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.208448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.418793    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.422244    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.711448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.921287    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.923795    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.209904    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.419914    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.422336    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.711037    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.920967    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.928515    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.207431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.419316    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.422381    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.709295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.924149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.928383    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.208000    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.428340    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.431876    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.709426    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.920188    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.924270    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.207181    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.418439    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.423100    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.707578    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.937088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.939327    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.208907    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.420989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.423616    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.708309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.919632    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.924273    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.207435    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.419671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.423102    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.783791    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.919989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.924314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.210022    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.420054    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.431837    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.562020    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:38.713780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.923654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.097166    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.208499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.429072    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.429738    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.711870    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.726897    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.164840561s)
	W0929 10:21:39.726947    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.726967    8330 retry.go:31] will retry after 11.307676359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
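The inspektor-gadget retries above all fail in kubectl's client-side validation: every document in a manifest must declare top-level apiVersion and kind, and the error says at least one document in /etc/kubernetes/addons/ig-crd.yaml does not. A rough way to confirm and work around this on the node, using the paths from the log (the actual file contents are not shown in this report, so this is only a sketch):

  # Inspect the first lines of the shipped manifest inside the minikube VM
  minikube ssh -p addons-911532 -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml

  # The error message itself names the escape hatch: skip client-side validation
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml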
	I0929 10:21:39.923119    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.930020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.210041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.420416    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.423961    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.709983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.918532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.921906    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.211550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.419901    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.421841    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.710969    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.918815    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.923114    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.210789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.421257    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.423834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.711332    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.919390    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.923203    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.209065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.418434    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.425216    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.710063    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.917640    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.922545    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.205527    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.418369    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.422405    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.712591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.925166    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.926743    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.214074    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.418599    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.422428    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.713883    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.920464    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.923397    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.207761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.424770    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.430331    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.708102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.928807    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.930451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.205481    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.418566    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.425398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.713263    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.919750    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.923524    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.206758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.419899    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.421913    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.711173    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.923285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.923314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.208056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.419528    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.423287    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.711515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.924180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.925537    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.212106    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.419682    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.423313    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.716590    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.919524    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.922669    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.034797    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:51.209977    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.418761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.712918    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.923780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.926533    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.208987    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.265550    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.230718165s)
	W0929 10:21:52.265592    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.265613    8330 retry.go:31] will retry after 29.631524393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.428241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.428344    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.749549    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.921742    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.928462    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.207817    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.419516    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.423773    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.711799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.920857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.925608    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.206121    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.419654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.424065    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.715431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.920151    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.925741    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.212980    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.419636    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.423024    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.713534    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.925668    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.934020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.245122    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.419044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.422805    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.708253    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.922688    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.922921    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:57.212695    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.430279    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:57.435265    8330 kapi.go:107] duration metric: took 1m13.516044822s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:21:57.708402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.924317    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.210469    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.418928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.712217    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.918879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.210802    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.421325    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.707536    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.923138    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.208005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.419250    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.708379    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.918693    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.206545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.418717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.707897    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.924458    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.205991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.419531    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.707091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.918959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.207504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.419459    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.707093    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.919081    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.207001    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.418468    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.707785    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.918993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.207795    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.418672    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.706790    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.920088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.207438    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.418671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.705954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.919275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.206855    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.418730    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.706264    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.918117    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.206783    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.426939    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.710678    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.918698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.206327    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.418553    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.707129    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.918195    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.207272    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.418565    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.707124    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.919764    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.206241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.418797    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.706944    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.919689    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.207328    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.418983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.706788    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.919311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.206761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.419370    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.712805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.919513    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.206504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.418758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.706621    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.918962    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.207334    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.419169    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.708290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.918738    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.206832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.419219    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.707913    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.919338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.207062    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.418184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.707167    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.918891    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.207006    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.418163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.707075    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.919925    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.206556    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.418550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.713091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.920930    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.213277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.421532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.714653    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.919900    8330 kapi.go:107] duration metric: took 1m32.50483081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:22:20.922981    8330 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-911532 cluster.
	I0929 10:22:20.924653    8330 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:22:20.926061    8330 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
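As a hedged illustration of the opt-out described in the gcp-auth messages above (the label key comes from the message; the pod name and image here are made up):

  # Pods carrying the gcp-auth-skip-secret label do not get the GCP credentials mounted
  kubectl --context addons-911532 run no-gcp-creds --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600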
	I0929 10:22:21.207013    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.714545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.897772    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:22:22.206398    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:22:22.599960    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:22:22.600034    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600048    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600335    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:22:22.600380    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600381    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600626    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600645    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600652    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:22:22.600742    8330 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 10:22:22.710659    8330 kapi.go:107] duration metric: took 1m37.508081362s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:22:22.712652    8330 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, nvidia-device-plugin, registry-creds, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:22:22.713925    8330 addons.go:514] duration metric: took 1m48.135056911s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns default-storageclass cloud-spanner storage-provisioner nvidia-device-plugin registry-creds storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
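For reference, the set of addons reported as enabled above can be re-checked later with the addons subcommand (profile name taken from this run):

  minikube -p addons-911532 addons list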
	I0929 10:22:22.713972    8330 start.go:246] waiting for cluster config update ...
	I0929 10:22:22.713998    8330 start.go:255] writing updated cluster config ...
	I0929 10:22:22.714320    8330 ssh_runner.go:195] Run: rm -f paused
	I0929 10:22:22.723573    8330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:22.726685    8330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.731909    8330 pod_ready.go:94] pod "coredns-66bc5c9577-2lxh5" is "Ready"
	I0929 10:22:22.731936    8330 pod_ready.go:86] duration metric: took 5.225628ms for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.733644    8330 pod_ready.go:83] waiting for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.738810    8330 pod_ready.go:94] pod "etcd-addons-911532" is "Ready"
	I0929 10:22:22.738834    8330 pod_ready.go:86] duration metric: took 5.173944ms for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.741797    8330 pod_ready.go:83] waiting for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.754573    8330 pod_ready.go:94] pod "kube-apiserver-addons-911532" is "Ready"
	I0929 10:22:22.754598    8330 pod_ready.go:86] duration metric: took 12.780428ms for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.758796    8330 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.128329    8330 pod_ready.go:94] pod "kube-controller-manager-addons-911532" is "Ready"
	I0929 10:22:23.128371    8330 pod_ready.go:86] duration metric: took 369.549352ms for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.328006    8330 pod_ready.go:83] waiting for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.728722    8330 pod_ready.go:94] pod "kube-proxy-zhcch" is "Ready"
	I0929 10:22:23.728750    8330 pod_ready.go:86] duration metric: took 400.712378ms for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.928748    8330 pod_ready.go:83] waiting for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327749    8330 pod_ready.go:94] pod "kube-scheduler-addons-911532" is "Ready"
	I0929 10:22:24.327772    8330 pod_ready.go:86] duration metric: took 399.002764ms for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327782    8330 pod_ready.go:40] duration metric: took 1.604186731s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
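The extra readiness wait logged above can be approximated by hand with kubectl wait, reusing the label selectors from the log line (a sketch of an equivalent check, not the exact mechanism minikube uses):

  kubectl --context addons-911532 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl --context addons-911532 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m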
	I0929 10:22:24.369933    8330 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:22:24.371860    8330 out.go:179] * Done! kubectl is now configured to use "addons-911532" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.493899640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1aee3cc3-fa8c-4f75-b651-9ace9e2abd41 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.496091818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca291973-6b99-4042-a006-046758cadc12 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.497608879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141924497586987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca291973-6b99-4042-a006-046758cadc12 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.498362203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1991bc52-3aac-46a8-ad9c-90620f0875bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.498611292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1991bc52-3aac-46a8-ad9c-90620f0875bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.499121016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSandboxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},An
notations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-91
1532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1991bc52-3aac-46a8-ad9c-90620f0875bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.539835417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77407a4f-07b5-468e-bf77-2d6bb876481c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.540027815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77407a4f-07b5-468e-bf77-2d6bb876481c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.541438346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e485f963-5c4a-4a3d-8f6b-958db6d1df63 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.542651045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141924542617080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e485f963-5c4a-4a3d-8f6b-958db6d1df63 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.543423560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b75adddd-f4c5-4061-958b-d6fdf583b66b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.543703028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b75adddd-f4c5-4061-958b-d6fdf583b66b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.544263088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSandboxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},An
notations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-91
1532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b75adddd-f4c5-4061-958b-d6fdf583b66b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.580886815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b682e74a-8e51-4339-88ba-4d6d608e44c4 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.581009705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b682e74a-8e51-4339-88ba-4d6d608e44c4 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.584569557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e87edad6-31f4-4790-8d8d-8560d94b970f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.585881885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141924585858670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e87edad6-31f4-4790-8d8d-8560d94b970f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.586625948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=236ca9fd-d945-4342-8183-d0c7d1da92a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.586677416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=236ca9fd-d945-4342-8183-d0c7d1da92a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.587048539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSandboxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},An
notations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-91
1532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=236ca9fd-d945-4342-8183-d0c7d1da92a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.600588133Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99ae09fd-715a-4d87-8e81-4d036e451ae5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.601670721Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e0b064cccdd455f2a71356e8dcbb19f3970a181ce68d26c14e87f00d2739bd1f,Metadata:&PodSandboxMetadata{Name:nginx,Uid:c16b0297-3ef5-4961-9f5e-0019acc5ea5f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141443499497208,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c16b0297-3ef5-4961-9f5e-0019acc5ea5f,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:24:03.180744232Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3789f10f83a797b83bdc1fba00438e0b432e4dd54871ae9273fe6f76d46efb0f,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:19fbb660-be46-4ddb-af92-da7e55790348,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141401288511226,Labels:map[string]string{app: task-pv
-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 19fbb660-be46-4ddb-af92-da7e55790348,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:23:20.966733534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:50aa0ab4-8b35-4c2d-a178-4efae92e01df,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141345300856254,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:22:24.975897483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&PodSandboxMetadata{
Name:ingress-nginx-controller-9cc49f96f-vttt9,Uid:2aad62c9-1c19-48f5-8b3c-05a46b75e030,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141307960485806,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,pod-template-hash: 9cc49f96f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:20:43.538498112Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&PodSandboxMetadata{Name:gadget-tp4c9,Uid:b33b4eee-87ed-427c-97fe-684dc1a39dc1,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141243650915759,Labels:map[string]string{controller-revision-hash: 5d99b94
fd5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-09-29T10:20:42.421747015Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:3a756c7b-7c15-49df-8410-36c37bdf4785,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141241615548730,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49d
f-8410-36c37bdf4785,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-i
ngress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-09-29T10:20:40.375861106Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:03841ce7-2069-4447-8adf-81b1e5233916,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141241453295697,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":
{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-29T10:20:40.764827278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-jh557,Uid:5db58f7c-939d-4f8a-ad56-5e623bd97274,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141238289915496,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e
623bd97274,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:20:37.924476413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&PodSandboxMetadata{Name:kube-proxy-zhcch,Uid:abca3b04-811d-4342-831f-4568c9eb2ee7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141234832448557,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:20:33.836291613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&PodSandbox
Metadata{Name:coredns-66bc5c9577-2lxh5,Uid:f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141234542717120,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T10:20:34.201001454Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-911532,Uid:bf0001919057aab7c9bba4425845358c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141222614127024,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: bf0001919057aab7c9bba4425845358c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.179:8443,kubernetes.io/config.hash: bf0001919057aab7c9bba4425845358c,kubernetes.io/config.seen: 2025-09-29T10:20:21.482232950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&PodSandboxMetadata{Name:etcd-addons-911532,Uid:fb644a85a1a2dd20a9929f14a1844358,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141222612871103,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.179:2379,kubernetes.io/config.hash: fb644a85a1a2dd20a9929f14a1844358,kubernetes.io/con
fig.seen: 2025-09-29T10:20:21.482231664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-911532,Uid:d2f152e69a7a65e5947151db70e65d9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141222612473580,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2f152e69a7a65e5947151db70e65d9f,kubernetes.io/config.seen: 2025-09-29T10:20:21.482233797Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-911532,Uid:edab1ff75c1cd7a0642fffd0b21cd736
,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759141222596992123,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: edab1ff75c1cd7a0642fffd0b21cd736,kubernetes.io/config.seen: 2025-09-29T10:20:21.482228725Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=99ae09fd-715a-4d87-8e81-4d036e451ae5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.606010508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03670f96-7c19-4cc8-99ac-77ebae340d25 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.606089470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03670f96-7c19-4cc8-99ac-77ebae340d25 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:32:04 addons-911532 crio[817]: time="2025-09-29 10:32:04.607542251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,Sta
te:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/
k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},
Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:5
2546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"co
ntainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03670f96-7c19-4cc8-99ac-77ebae340d25 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd2da61f9111a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          9 minutes ago       Running             busybox                   0                   760f3f111a462       busybox
	f31c1763f6da5       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             10 minutes ago      Running             controller                0                   03bb444700e14       ingress-nginx-controller-9cc49f96f-vttt9
	7dbc3a7ea7e45       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             10 minutes ago      Exited              patch                     2                   6c52aed8c7fa6       ingress-nginx-admission-patch-xljfq
	1184f2460f269       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   10 minutes ago      Exited              create                    0                   26d005e1ee499       ingress-nginx-admission-create-8bg4m
	d65010026ccf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            10 minutes ago      Running             gadget                    0                   c415564a01e1f       gadget-tp4c9
	efb1fb889a566       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               10 minutes ago      Running             minikube-ingress-dns      0                   6a9b5cb08e2bc       kube-ingress-dns-minikube
	9b6f4ec2f78e9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     11 minutes ago      Running             amd-gpu-device-plugin     0                   a5ffe00771c3b       amd-gpu-device-plugin-jh557
	8590713c2981f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner       0                   38c60c0820a0d       storage-provisioner
	b6c5c0be5e893       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             11 minutes ago      Running             coredns                   0                   b478e3ec97228       coredns-66bc5c9577-2lxh5
	175a117fb6f06       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             11 minutes ago      Running             kube-proxy                0                   0d650e4b5f405       kube-proxy-zhcch
	3b6dbae6113ba       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             11 minutes ago      Running             etcd                      0                   f208189bae6ea       etcd-addons-911532
	a7fd029454118       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             11 minutes ago      Running             kube-controller-manager   0                   2ab362827edd0       kube-controller-manager-addons-911532
	e0a50327ef601       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             11 minutes ago      Running             kube-scheduler            0                   04eeebd713634       kube-scheduler-addons-911532
	a00a42bfe3851       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             11 minutes ago      Running             kube-apiserver            0                   4232352893b52       kube-apiserver-addons-911532
	
	
	==> coredns [b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a] <==
	[INFO] 10.244.0.8:50652 - 16984 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000291243s
	[INFO] 10.244.0.8:50652 - 50804 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000151578s
	[INFO] 10.244.0.8:50652 - 20738 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000103041s
	[INFO] 10.244.0.8:50652 - 42178 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000141825s
	[INFO] 10.244.0.8:50652 - 37241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104758s
	[INFO] 10.244.0.8:50652 - 56970 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015054s
	[INFO] 10.244.0.8:50652 - 44050 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000117583s
	[INFO] 10.244.0.8:48716 - 14813 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130702s
	[INFO] 10.244.0.8:48716 - 15156 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000208908s
	[INFO] 10.244.0.8:37606 - 64555 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146123s
	[INFO] 10.244.0.8:37606 - 64844 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012694s
	[INFO] 10.244.0.8:46483 - 39882 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094662s
	[INFO] 10.244.0.8:46483 - 40157 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000344836s
	[INFO] 10.244.0.8:39149 - 27052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128832s
	[INFO] 10.244.0.8:39149 - 26844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220783s
	[INFO] 10.244.0.23:43438 - 39803 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000622841s
	[INFO] 10.244.0.23:47210 - 22362 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000808655s
	[INFO] 10.244.0.23:54815 - 54620 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102275s
	[INFO] 10.244.0.23:48706 - 23486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000290579s
	[INFO] 10.244.0.23:35174 - 37530 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095187s
	[INFO] 10.244.0.23:58302 - 160 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148316s
	[INFO] 10.244.0.23:60222 - 18112 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001543386s
	[INFO] 10.244.0.23:42303 - 24400 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005221068s
	[INFO] 10.244.0.27:57662 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000379174s
	[INFO] 10.244.0.27:52524 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831634s
	
	
	==> describe nodes <==
	Name:               addons-911532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-911532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=addons-911532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-911532
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-911532
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:32:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:30:30 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:30:30 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:30:30 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:30:30 +0000   Mon, 29 Sep 2025 10:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    addons-911532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c8a2bbd76874c1a8020738f402773b8
	  System UUID:                0c8a2bbd-7687-4c1a-8020-738f402773b8
	  Boot ID:                    9d51dc84-868d-42de-9a46-75702ae9a571
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  gadget                      gadget-tp4c9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-vttt9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 amd-gpu-device-plugin-jh557                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-2lxh5                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-911532                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-911532                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-911532       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zhcch                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-911532                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-911532 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-911532 event: Registered Node addons-911532 in Controller
	
	
	==> dmesg <==
	[  +5.855059] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.424741] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.533609] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.677200] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.756729] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.380361] kauditd_printk_skb: 115 callbacks suppressed
	[  +4.682193] kauditd_printk_skb: 120 callbacks suppressed
	[  +4.066585] kauditd_printk_skb: 83 callbacks suppressed
	[Sep29 10:22] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.687590] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.038379] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.163052] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.578786] kauditd_printk_skb: 46 callbacks suppressed
	[Sep29 10:23] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000124] kauditd_printk_skb: 22 callbacks suppressed
	[ +30.032876] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.772049] kauditd_printk_skb: 107 callbacks suppressed
	[Sep29 10:24] kauditd_printk_skb: 54 callbacks suppressed
	[ +50.465336] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 9 callbacks suppressed
	[Sep29 10:25] kauditd_printk_skb: 26 callbacks suppressed
	[Sep29 10:28] kauditd_printk_skb: 10 callbacks suppressed
	[Sep29 10:29] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651] <==
	{"level":"info","ts":"2025-09-29T10:21:39.088425Z","caller":"traceutil/trace.go:172","msg":"trace[1920545131] linearizableReadLoop","detail":"{readStateIndex:1075; appliedIndex:1075; }","duration":"165.718933ms","start":"2025-09-29T10:21:38.922692Z","end":"2025-09-29T10:21:39.088411Z","steps":["trace[1920545131] 'read index received'  (duration: 165.713078ms)","trace[1920545131] 'applied index is now lower than readState.Index'  (duration: 5.095µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:39.088595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.848818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:39.088625Z","caller":"traceutil/trace.go:172","msg":"trace[1181994063] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"165.92758ms","start":"2025-09-29T10:21:38.922688Z","end":"2025-09-29T10:21:39.088616Z","steps":["trace[1181994063] 'agreement among raft nodes before linearized reading'  (duration: 165.822615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:21:39.089113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.606269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq\" limit:1 ","response":"range_response_count:1 size:4722"}
	{"level":"info","ts":"2025-09-29T10:21:39.089162Z","caller":"traceutil/trace.go:172","msg":"trace[517473847] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq; range_end:; response_count:1; response_revision:1048; }","duration":"164.659529ms","start":"2025-09-29T10:21:38.924494Z","end":"2025-09-29T10:21:39.089153Z","steps":["trace[517473847] 'agreement among raft nodes before linearized reading'  (duration: 164.533832ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:39.089291Z","caller":"traceutil/trace.go:172","msg":"trace[103671638] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"167.91547ms","start":"2025-09-29T10:21:38.921368Z","end":"2025-09-29T10:21:39.089284Z","steps":["trace[103671638] 'process raft request'  (duration: 167.512399ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:41.128032Z","caller":"traceutil/trace.go:172","msg":"trace[1380742237] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"160.629944ms","start":"2025-09-29T10:21:40.967387Z","end":"2025-09-29T10:21:41.128017Z","steps":["trace[1380742237] 'process raft request'  (duration: 160.428456ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:52.740363Z","caller":"traceutil/trace.go:172","msg":"trace[100017207] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"122.049264ms","start":"2025-09-29T10:21:52.618297Z","end":"2025-09-29T10:21:52.740347Z","steps":["trace[100017207] 'process raft request'  (duration: 121.808982ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.234316Z","caller":"traceutil/trace.go:172","msg":"trace[1596468790] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1190; }","duration":"200.26342ms","start":"2025-09-29T10:21:56.034037Z","end":"2025-09-29T10:21:56.234300Z","steps":["trace[1596468790] 'read index received'  (duration: 200.256637ms)","trace[1596468790] 'applied index is now lower than readState.Index'  (duration: 6.184µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:56.234915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.854605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:56.235162Z","caller":"traceutil/trace.go:172","msg":"trace[794219373] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1159; }","duration":"201.11834ms","start":"2025-09-29T10:21:56.034033Z","end":"2025-09-29T10:21:56.235151Z","steps":["trace[794219373] 'agreement among raft nodes before linearized reading'  (duration: 200.701253ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.235298Z","caller":"traceutil/trace.go:172","msg":"trace[1282453769] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"273.44806ms","start":"2025-09-29T10:21:55.961839Z","end":"2025-09-29T10:21:56.235287Z","steps":["trace[1282453769] 'process raft request'  (duration: 272.570369ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:49.922596Z","caller":"traceutil/trace.go:172","msg":"trace[1297543237] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"107.889005ms","start":"2025-09-29T10:23:49.814676Z","end":"2025-09-29T10:23:49.922565Z","steps":["trace[1297543237] 'process raft request'  (duration: 107.763843ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906428Z","caller":"traceutil/trace.go:172","msg":"trace[852559153] linearizableReadLoop","detail":"{readStateIndex:1673; appliedIndex:1673; }","duration":"207.27017ms","start":"2025-09-29T10:23:56.699140Z","end":"2025-09-29T10:23:56.906410Z","steps":["trace[852559153] 'read index received'  (duration: 207.264352ms)","trace[852559153] 'applied index is now lower than readState.Index'  (duration: 4.799µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:23:56.906582Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.425338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:23:56.906604Z","caller":"traceutil/trace.go:172","msg":"trace[159869457] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1610; }","duration":"207.488053ms","start":"2025-09-29T10:23:56.699111Z","end":"2025-09-29T10:23:56.906599Z","steps":["trace[159869457] 'agreement among raft nodes before linearized reading'  (duration: 207.399273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.171419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0\" limit:1 ","response":"range_response_count:1 size:4572"}
	{"level":"info","ts":"2025-09-29T10:23:56.906788Z","caller":"traceutil/trace.go:172","msg":"trace[1828175108] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0; range_end:; response_count:1; response_revision:1611; }","duration":"168.215755ms","start":"2025-09-29T10:23:56.738542Z","end":"2025-09-29T10:23:56.906758Z","steps":["trace[1828175108] 'agreement among raft nodes before linearized reading'  (duration: 168.108786ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906872Z","caller":"traceutil/trace.go:172","msg":"trace[928903816] transaction","detail":"{read_only:false; response_revision:1611; number_of_response:1; }","duration":"363.567544ms","start":"2025-09-29T10:23:56.543297Z","end":"2025-09-29T10:23:56.906865Z","steps":["trace[928903816] 'process raft request'  (duration: 363.245361ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.243902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-29T10:23:56.906980Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:23:56.543275Z","time spent":"363.614208ms","remote":"127.0.0.1:49608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1603 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-29T10:23:56.906992Z","caller":"traceutil/trace.go:172","msg":"trace[679122027] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1611; }","duration":"126.265845ms","start":"2025-09-29T10:23:56.780721Z","end":"2025-09-29T10:23:56.906987Z","steps":["trace[679122027] 'agreement among raft nodes before linearized reading'  (duration: 126.228069ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:30:24.558236Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1836}
	{"level":"info","ts":"2025-09-29T10:30:24.618107Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1836,"took":"58.526815ms","hash":1683385000,"current-db-size-bytes":6262784,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4145152,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-09-29T10:30:24.618306Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1683385000,"revision":1836,"compact-revision":-1}
	
	
	==> kernel <==
	 10:32:04 up 12 min,  0 users,  load average: 0.54, 0.57, 0.50
	Linux addons-911532 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e] <==
	I0929 10:25:45.763121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:37.011596       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:47.943967       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:27:48.771890       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:27:57.050958       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:28:53.219223       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:29:15.898925       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:29:24.115031       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:29:24.115296       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:29:24.157165       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:29:24.157323       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:29:24.169111       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:29:24.169221       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:29:24.191814       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:29:24.191886       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:29:24.241500       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:29:24.241746       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 10:29:25.170133       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:29:25.240468       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 10:29:25.304119       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 10:30:04.808718       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:30:25.992519       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:30:27.519519       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:31:27.608516       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:31:47.145818       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636] <==
	E0929 10:30:07.032764       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:30:07.033963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:30:17.989268       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:30:32.989377       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:30:43.828161       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:30:43.829428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:30:47.989507       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:30:49.934907       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:30:49.936027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:30:55.057957       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:30:55.059038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:31:02.989524       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:31:17.990070       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:31:32.317455       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:31:32.318535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:31:32.991013       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:31:33.079615       1 csi_attacher.go:520] kubernetes.io/csi: Attach timeout after 2m0s [volume=521d0655-9d1e-11f0-94be-8a78fd083ac9; attachment.ID=csi-5aa11060b1f2caad891df213e5be06be4d5f181f67d22fba8ccc8fb977afe970]
	E0929 10:31:33.079835       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^521d0655-9d1e-11f0-94be-8a78fd083ac9 podName: nodeName:}" failed. No retries permitted until 2025-09-29 10:31:33.579760468 +0000 UTC m=+670.522495054 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-9b07580b-c182-4f66-9f3d-0c5c46a22029" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^521d0655-9d1e-11f0-94be-8a78fd083ac9") from node "addons-911532" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 521d0655-9d1e-11f0-94be-8a78fd083ac9
	I0929 10:31:33.650647       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^521d0655-9d1e-11f0-94be-8a78fd083ac9" nodeName="addons-911532" scheduledPods=["default/task-pv-pod"]
	E0929 10:31:38.560160       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:31:38.561146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:31:46.352919       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:31:46.353920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:31:47.991669       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:32:02.992499       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab] <==
	I0929 10:20:35.986576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:20:36.189499       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:20:36.189548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.179"]
	E0929 10:20:36.189623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:20:36.301867       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:20:36.301934       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:20:36.301961       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:20:36.326623       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:20:36.327146       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:20:36.327246       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:20:36.336796       1 config.go:200] "Starting service config controller"
	I0929 10:20:36.336830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:20:36.336848       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:20:36.336851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:20:36.336861       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:20:36.336866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:20:36.342731       1 config.go:309] "Starting node config controller"
	I0929 10:20:36.342767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:20:36.342774       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:20:36.437304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:20:36.437613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:20:36.437632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50] <==
	E0929 10:20:26.063633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:26.064483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:26.064623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:20:26.064815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:26.065104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:26.069817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:26.071395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:20:26.072119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.073653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:26.073850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.074029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:26.883755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:20:26.932747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:26.936951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.973390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.982912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:27.004100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:27.067449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:20:27.073035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:27.168604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:20:27.203313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:27.256704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:27.286622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:27.625245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 10:20:29.547277       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:31:00 addons-911532 kubelet[1498]: I0929 10:31:00.599721    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:31:02 addons-911532 kubelet[1498]: E0929 10:31:02.605435    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	Sep 29 10:31:06 addons-911532 kubelet[1498]: E0929 10:31:06.599620    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="19fbb660-be46-4ddb-af92-da7e55790348"
	Sep 29 10:31:09 addons-911532 kubelet[1498]: E0929 10:31:09.199336    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141869198954398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:09 addons-911532 kubelet[1498]: E0929 10:31:09.199472    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141869198954398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:17 addons-911532 kubelet[1498]: E0929 10:31:17.601327    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	Sep 29 10:31:19 addons-911532 kubelet[1498]: E0929 10:31:19.201927    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141879201547313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:19 addons-911532 kubelet[1498]: E0929 10:31:19.201990    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141879201547313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:21 addons-911532 kubelet[1498]: I0929 10:31:21.598757    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jh557" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:31:29 addons-911532 kubelet[1498]: E0929 10:31:29.206462    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141889206079800  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:29 addons-911532 kubelet[1498]: E0929 10:31:29.206505    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141889206079800  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:30 addons-911532 kubelet[1498]: E0929 10:31:30.600768    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	Sep 29 10:31:39 addons-911532 kubelet[1498]: E0929 10:31:39.208640    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141899208150648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:39 addons-911532 kubelet[1498]: E0929 10:31:39.208692    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141899208150648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:39 addons-911532 kubelet[1498]: W0929 10:31:39.964487    1498 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 29 10:31:41 addons-911532 kubelet[1498]: E0929 10:31:41.601077    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	Sep 29 10:31:49 addons-911532 kubelet[1498]: E0929 10:31:49.210847    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141909210281151  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:49 addons-911532 kubelet[1498]: E0929 10:31:49.210871    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141909210281151  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:52 addons-911532 kubelet[1498]: E0929 10:31:52.354493    1498 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:31:52 addons-911532 kubelet[1498]: E0929 10:31:52.354585    1498 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:31:52 addons-911532 kubelet[1498]: E0929 10:31:52.354673    1498 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(19fbb660-be46-4ddb-af92-da7e55790348): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:31:52 addons-911532 kubelet[1498]: E0929 10:31:52.354709    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="19fbb660-be46-4ddb-af92-da7e55790348"
	Sep 29 10:31:56 addons-911532 kubelet[1498]: E0929 10:31:56.602304    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	Sep 29 10:31:59 addons-911532 kubelet[1498]: E0929 10:31:59.214210    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141919213602820  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:31:59 addons-911532 kubelet[1498]: E0929 10:31:59.214234    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141919213602820  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	
	
	==> storage-provisioner [8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991] <==
	W0929 10:31:39.569593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:41.573503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:41.578798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:43.582948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:43.590394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:45.594316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:45.599956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:47.604013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:47.612139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:49.615962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:49.624839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:51.628829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:51.633879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:53.637351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:53.642865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:55.646281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:55.654322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:57.658122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:57.663032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:59.666445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:31:59.671793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:32:01.676111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:32:01.684746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:32:03.692805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:32:03.706902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-911532 -n addons-911532
helpers_test.go:269: (dbg) Run:  kubectl --context addons-911532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq: exit status 1 (93.59405ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:24:03 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4bxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j4bxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-911532
	  Warning  Failed     2m38s (x3 over 6m27s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    118s (x4 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     87s (x4 over 6m27s)    kubelet            Error: ErrImagePull
	  Warning  Failed     87s                    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x10 over 6m26s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x10 over 6m26s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:23:20 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8z2x6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-8z2x6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           8m45s                 default-scheduler        Successfully assigned default/task-pv-pod to addons-911532
	  Warning  Failed              2m7s (x4 over 7m42s)  kubelet                  Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff             59s (x10 over 7m42s)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              59s (x10 over 7m42s)  kubelet                  Error: ImagePullBackOff
	  Normal   Pulling             44s (x5 over 8m44s)   kubelet                  Pulling image "docker.io/nginx"
	  Warning  FailedAttachVolume  32s                   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-9b07580b-c182-4f66-9f3d-0c5c46a22029" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 521d0655-9d1e-11f0-94be-8a78fd083ac9
	  Warning  Failed              13s (x5 over 7m42s)   kubelet                  Error: ErrImagePull
	  Warning  Failed              13s                   kubelet                  Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6jzv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g6jzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8bg4m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xljfq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable ingress-dns --alsologtostderr -v=1: (1.063055385s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable ingress --alsologtostderr -v=1: (7.78816395s)
--- FAIL: TestAddons/parallel/Ingress (491.92s)
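The failure above is not an ingress-addon defect: every ErrImagePull captured in the logs reports Docker Hub's toomanyrequests response, i.e. the unauthenticated pull rate limit, and the same error recurs for task-pv-pod in the CSI test below. A minimal sketch of two possible workarounds for such runs, assuming Docker Hub credentials are available and using regcred as an arbitrary placeholder secret name (illustrative only, not part of the test harness):

	# Load the image from the host into the node's container storage so the kubelet does not pull it
	out/minikube-linux-amd64 -p addons-911532 image load docker.io/nginx:alpine

	# Or authenticate pulls: create a registry secret and attach it to the default service account
	kubectl --context addons-911532 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-911532 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Either approach should let the pod get past ImagePullBackOff without changing the manifests under test.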

                                                
                                    
TestAddons/parallel/CSI (389.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 10:23:01.674082    7691 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 10:23:01.681503    7691 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:23:01.681523    7691 kapi.go:107] duration metric: took 7.455907ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.464073ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-911532 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-911532 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [19fbb660-be46-4ddb-af92-da7e55790348] Pending
helpers_test.go:352: "task-pv-pod" [19fbb660-be46-4ddb-af92-da7e55790348] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-911532 -n addons-911532
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-29 10:29:21.217376226 +0000 UTC m=+585.283452001
addons_test.go:567: (dbg) Run:  kubectl --context addons-911532 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-911532 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-911532/192.168.39.179
Start Time:       Mon, 29 Sep 2025 10:23:20 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8z2x6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-8z2x6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-911532
Warning  Failed     70s (x3 over 4m58s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     70s (x3 over 4m58s)  kubelet            Error: ErrImagePull
Normal   BackOff    30s (x5 over 4m58s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     30s (x5 over 4m58s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    18s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-911532 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-911532 logs task-pv-pod -n default: exit status 1 (70.123957ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-911532 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-911532 -n addons-911532
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 logs -n 25: (1.376453552s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-910458                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ -o=json --download-only -p download-only-452531 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-910458                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ --download-only -p binary-mirror-757361 --alsologtostderr --binary-mirror http://127.0.0.1:43621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ -p binary-mirror-757361                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ addons  │ disable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ start   │ -p addons-911532 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ enable headlamp -p addons-911532 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ip      │ addons-911532 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:27 UTC │ 29 Sep 25 10:28 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:49.657940    8330 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:49.658280    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658293    8330 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:49.658299    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658774    8330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:19:49.659724    8330 out.go:368] Setting JSON to false
	I0929 10:19:49.660569    8330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":135,"bootTime":1759141055,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:49.660646    8330 start.go:140] virtualization: kvm guest
	I0929 10:19:49.662346    8330 out.go:179] * [addons-911532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:49.663847    8330 notify.go:220] Checking for updates...
	I0929 10:19:49.663868    8330 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:19:49.665023    8330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:49.666170    8330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:19:49.667465    8330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:49.668605    8330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:19:49.669820    8330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:19:49.670997    8330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:49.700388    8330 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 10:19:49.701463    8330 start.go:304] selected driver: kvm2
	I0929 10:19:49.701479    8330 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:19:49.701491    8330 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:19:49.702129    8330 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.702205    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.715255    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.715283    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.729163    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.729198    8330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:49.729518    8330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:19:49.729559    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:19:49.729599    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:19:49.729607    8330 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:49.729659    8330 start.go:348] cluster config:
	{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:49.729764    8330 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.731718    8330 out.go:179] * Starting "addons-911532" primary control-plane node in "addons-911532" cluster
	I0929 10:19:49.732842    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:49.732885    8330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:49.732892    8330 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:49.732961    8330 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:19:49.732971    8330 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:19:49.733271    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:19:49.733296    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json: {Name:mk3b1c31f51191d700bb099fb8f771ac33c82a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:19:49.733457    8330 start.go:360] acquireMachinesLock for addons-911532: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 10:19:49.733506    8330 start.go:364] duration metric: took 34.004µs to acquireMachinesLock for "addons-911532"
	I0929 10:19:49.733524    8330 start.go:93] Provisioning new machine with config: &{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:19:49.733580    8330 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 10:19:49.735166    8330 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 10:19:49.735279    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:19:49.735315    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:19:49.747570    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0929 10:19:49.748034    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:19:49.748606    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:19:49.748628    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:19:49.748980    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:19:49.749155    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:19:49.749278    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:19:49.749427    8330 start.go:159] libmachine.API.Create for "addons-911532" (driver="kvm2")
	I0929 10:19:49.749454    8330 client.go:168] LocalClient.Create starting
	I0929 10:19:49.749497    8330 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem
	I0929 10:19:49.897019    8330 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem
	I0929 10:19:49.971089    8330 main.go:141] libmachine: Running pre-create checks...
	I0929 10:19:49.971109    8330 main.go:141] libmachine: (addons-911532) Calling .PreCreateCheck
	I0929 10:19:49.971568    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:19:49.971999    8330 main.go:141] libmachine: Creating machine...
	I0929 10:19:49.972014    8330 main.go:141] libmachine: (addons-911532) Calling .Create
	I0929 10:19:49.972178    8330 main.go:141] libmachine: (addons-911532) creating domain...
	I0929 10:19:49.972189    8330 main.go:141] libmachine: (addons-911532) creating network...
	I0929 10:19:49.973497    8330 main.go:141] libmachine: (addons-911532) DBG | found existing default network
	I0929 10:19:49.973637    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.973653    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>default</name>
	I0929 10:19:49.973661    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 10:19:49.973670    8330 main.go:141] libmachine: (addons-911532) DBG |   <forward mode='nat'>
	I0929 10:19:49.973677    8330 main.go:141] libmachine: (addons-911532) DBG |     <nat>
	I0929 10:19:49.973688    8330 main.go:141] libmachine: (addons-911532) DBG |       <port start='1024' end='65535'/>
	I0929 10:19:49.973700    8330 main.go:141] libmachine: (addons-911532) DBG |     </nat>
	I0929 10:19:49.973706    8330 main.go:141] libmachine: (addons-911532) DBG |   </forward>
	I0929 10:19:49.973715    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 10:19:49.973722    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 10:19:49.973731    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 10:19:49.973740    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.973749    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 10:19:49.973765    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.973776    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.973780    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.973787    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974334    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:49.974184    8358 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200dd0}
	I0929 10:19:49.974373    8330 main.go:141] libmachine: (addons-911532) DBG | defining private network:
	I0929 10:19:49.974397    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974420    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.974439    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:49.974466    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:49.974489    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:49.974503    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.974515    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:49.974525    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.974531    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.974536    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.974542    8330 main.go:141] libmachine: (addons-911532) DBG | 
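The XML just logged is the isolated libvirt network (mk-addons-911532, 192.168.39.0/24, DHCP range .2 to .253, host DNS disabled) that the kvm2 driver brings up before creating the VM; the guest later gets a second NIC on libvirt's stock "default" NAT network for outbound traffic. A minimal sketch of defining an equivalent network by hand, assuming virsh is installed and qemu:///system is reachable (this shells out to virsh, which is not how minikube itself talks to libvirt):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the definition logged above: an isolated /24 with DHCP
// for guests and host-side DNS disabled.
const networkXML = `<network>
  <name>mk-addons-911532</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the XML to a temp file and hand it to virsh.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-addons-911532"},
		{"net-autostart", "mk-addons-911532"},
	} {
		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Errorf("virsh %v: %w", args, err))
		}
	}
}

The net-define plus net-start pair corresponds roughly to the "creating private network" / "private network ... created" lines that follow in the log.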
	I0929 10:19:49.980371    8330 main.go:141] libmachine: (addons-911532) DBG | creating private network mk-addons-911532 192.168.39.0/24...
	I0929 10:19:50.045524    8330 main.go:141] libmachine: (addons-911532) DBG | private network mk-addons-911532 192.168.39.0/24 created
	I0929 10:19:50.045754    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:50.045775    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:50.045788    8330 main.go:141] libmachine: (addons-911532) setting up store path in /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.045815    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>1948f630-90e3-4c16-adbb-718b17efed7e</uuid>
	I0929 10:19:50.045832    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 10:19:50.045851    8330 main.go:141] libmachine: (addons-911532) building disk image from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:19:50.045876    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:30:e5:b4'/>
	I0929 10:19:50.045894    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:50.045921    8330 main.go:141] libmachine: (addons-911532) Downloading /home/jenkins/minikube-integration/21657-3816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 10:19:50.045936    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:50.045954    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:50.045966    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:50.045976    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:50.045985    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:50.045994    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:50.046009    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:50.046032    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.045748    8358 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.297023    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.296839    8358 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa...
	I0929 10:19:50.440022    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.439881    8358 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk...
	I0929 10:19:50.440071    8330 main.go:141] libmachine: (addons-911532) DBG | Writing magic tar header
	I0929 10:19:50.440088    8330 main.go:141] libmachine: (addons-911532) DBG | Writing SSH key tar header
	I0929 10:19:50.440542    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.440479    8358 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.440591    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532
	I0929 10:19:50.440619    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 (perms=drwx------)
	I0929 10:19:50.440632    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines (perms=drwxr-xr-x)
	I0929 10:19:50.440640    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines
	I0929 10:19:50.440665    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.440675    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816
	I0929 10:19:50.440683    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube (perms=drwxr-xr-x)
	I0929 10:19:50.440696    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816 (perms=drwxrwxr-x)
	I0929 10:19:50.440709    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 10:19:50.440718    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 10:19:50.440730    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins
	I0929 10:19:50.440740    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 10:19:50.440750    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home
	I0929 10:19:50.440759    8330 main.go:141] libmachine: (addons-911532) DBG | skipping /home - not owner
	I0929 10:19:50.440766    8330 main.go:141] libmachine: (addons-911532) defining domain...
	I0929 10:19:50.441750    8330 main.go:141] libmachine: (addons-911532) defining domain using XML: 
	I0929 10:19:50.441770    8330 main.go:141] libmachine: (addons-911532) <domain type='kvm'>
	I0929 10:19:50.441785    8330 main.go:141] libmachine: (addons-911532)   <name>addons-911532</name>
	I0929 10:19:50.441795    8330 main.go:141] libmachine: (addons-911532)   <memory unit='MiB'>4096</memory>
	I0929 10:19:50.441807    8330 main.go:141] libmachine: (addons-911532)   <vcpu>2</vcpu>
	I0929 10:19:50.441815    8330 main.go:141] libmachine: (addons-911532)   <features>
	I0929 10:19:50.441823    8330 main.go:141] libmachine: (addons-911532)     <acpi/>
	I0929 10:19:50.441831    8330 main.go:141] libmachine: (addons-911532)     <apic/>
	I0929 10:19:50.441838    8330 main.go:141] libmachine: (addons-911532)     <pae/>
	I0929 10:19:50.441843    8330 main.go:141] libmachine: (addons-911532)   </features>
	I0929 10:19:50.441851    8330 main.go:141] libmachine: (addons-911532)   <cpu mode='host-passthrough'>
	I0929 10:19:50.441858    8330 main.go:141] libmachine: (addons-911532)   </cpu>
	I0929 10:19:50.441866    8330 main.go:141] libmachine: (addons-911532)   <os>
	I0929 10:19:50.441873    8330 main.go:141] libmachine: (addons-911532)     <type>hvm</type>
	I0929 10:19:50.441881    8330 main.go:141] libmachine: (addons-911532)     <boot dev='cdrom'/>
	I0929 10:19:50.441885    8330 main.go:141] libmachine: (addons-911532)     <boot dev='hd'/>
	I0929 10:19:50.441892    8330 main.go:141] libmachine: (addons-911532)     <bootmenu enable='no'/>
	I0929 10:19:50.441896    8330 main.go:141] libmachine: (addons-911532)   </os>
	I0929 10:19:50.441903    8330 main.go:141] libmachine: (addons-911532)   <devices>
	I0929 10:19:50.441907    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='cdrom'>
	I0929 10:19:50.441927    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.441934    8330 main.go:141] libmachine: (addons-911532)       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.441939    8330 main.go:141] libmachine: (addons-911532)       <readonly/>
	I0929 10:19:50.441943    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441951    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='disk'>
	I0929 10:19:50.441959    8330 main.go:141] libmachine: (addons-911532)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 10:19:50.441966    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.441973    8330 main.go:141] libmachine: (addons-911532)       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.441978    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441990    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.441998    8330 main.go:141] libmachine: (addons-911532)       <source network='mk-addons-911532'/>
	I0929 10:19:50.442004    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442009    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442016    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.442022    8330 main.go:141] libmachine: (addons-911532)       <source network='default'/>
	I0929 10:19:50.442028    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442033    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442039    8330 main.go:141] libmachine: (addons-911532)     <serial type='pty'>
	I0929 10:19:50.442044    8330 main.go:141] libmachine: (addons-911532)       <target port='0'/>
	I0929 10:19:50.442050    8330 main.go:141] libmachine: (addons-911532)     </serial>
	I0929 10:19:50.442055    8330 main.go:141] libmachine: (addons-911532)     <console type='pty'>
	I0929 10:19:50.442067    8330 main.go:141] libmachine: (addons-911532)       <target type='serial' port='0'/>
	I0929 10:19:50.442072    8330 main.go:141] libmachine: (addons-911532)     </console>
	I0929 10:19:50.442078    8330 main.go:141] libmachine: (addons-911532)     <rng model='virtio'>
	I0929 10:19:50.442084    8330 main.go:141] libmachine: (addons-911532)       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.442090    8330 main.go:141] libmachine: (addons-911532)     </rng>
	I0929 10:19:50.442094    8330 main.go:141] libmachine: (addons-911532)   </devices>
	I0929 10:19:50.442100    8330 main.go:141] libmachine: (addons-911532) </domain>
	I0929 10:19:50.442106    8330 main.go:141] libmachine: (addons-911532) 
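The domain XML above captures the VM shape the driver asks libvirt for: 2 vCPUs, 4096 MiB of RAM, boot order cdrom then hd (so the boot2docker ISO boots while the raw disk persists state), one virtio NIC on mk-addons-911532 and one on the default NAT network, plus a serial console and a virtio RNG. A small sketch, using only the Go standard library, of rendering a comparable (trimmed) domain definition from a template; the struct fields and file paths here are illustrative, not minikube's actual template inputs:

package main

import (
	"os"
	"text/template"
)

// domainParams holds the values that vary per machine; field names are
// illustrative only.
type domainParams struct {
	Name       string
	MemoryMiB  int
	VCPUs      int
	ISOPath    string
	DiskPath   string
	PrivateNet string
}

// A trimmed-down version of the domain XML logged above (features, serial
// console, RNG, etc. omitted for brevity).
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:       "addons-911532",
		MemoryMiB:  4096,
		VCPUs:      2,
		ISOPath:    "/path/to/boot2docker.iso",       // placeholder path
		DiskPath:   "/path/to/addons-911532.rawdisk", // placeholder path
		PrivateNet: "mk-addons-911532",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

Feeding the rendered XML to virsh define and virsh start is roughly what the "defining domain" and "starting domain" steps in this log do through libvirt.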
	I0929 10:19:50.449537    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:be:29:87 in network default
	I0929 10:19:50.449973    8330 main.go:141] libmachine: (addons-911532) starting domain...
	I0929 10:19:50.449986    8330 main.go:141] libmachine: (addons-911532) ensuring networks are active...
	I0929 10:19:50.450009    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:50.450701    8330 main.go:141] libmachine: (addons-911532) Ensuring network default is active
	I0929 10:19:50.451007    8330 main.go:141] libmachine: (addons-911532) Ensuring network mk-addons-911532 is active
	I0929 10:19:50.451538    8330 main.go:141] libmachine: (addons-911532) getting domain XML...
	I0929 10:19:50.452379    8330 main.go:141] libmachine: (addons-911532) DBG | starting domain XML:
	I0929 10:19:50.452399    8330 main.go:141] libmachine: (addons-911532) DBG | <domain type='kvm'>
	I0929 10:19:50.452408    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>addons-911532</name>
	I0929 10:19:50.452415    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>0c8a2bbd-7687-4c1a-8020-738f402773b8</uuid>
	I0929 10:19:50.452446    8330 main.go:141] libmachine: (addons-911532) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 10:19:50.452469    8330 main.go:141] libmachine: (addons-911532) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 10:19:50.452483    8330 main.go:141] libmachine: (addons-911532) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 10:19:50.452491    8330 main.go:141] libmachine: (addons-911532) DBG |   <os>
	I0929 10:19:50.452498    8330 main.go:141] libmachine: (addons-911532) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 10:19:50.452505    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='cdrom'/>
	I0929 10:19:50.452514    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='hd'/>
	I0929 10:19:50.452525    8330 main.go:141] libmachine: (addons-911532) DBG |     <bootmenu enable='no'/>
	I0929 10:19:50.452545    8330 main.go:141] libmachine: (addons-911532) DBG |   </os>
	I0929 10:19:50.452558    8330 main.go:141] libmachine: (addons-911532) DBG |   <features>
	I0929 10:19:50.452564    8330 main.go:141] libmachine: (addons-911532) DBG |     <acpi/>
	I0929 10:19:50.452573    8330 main.go:141] libmachine: (addons-911532) DBG |     <apic/>
	I0929 10:19:50.452589    8330 main.go:141] libmachine: (addons-911532) DBG |     <pae/>
	I0929 10:19:50.452598    8330 main.go:141] libmachine: (addons-911532) DBG |   </features>
	I0929 10:19:50.452605    8330 main.go:141] libmachine: (addons-911532) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 10:19:50.452612    8330 main.go:141] libmachine: (addons-911532) DBG |   <clock offset='utc'/>
	I0929 10:19:50.452628    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 10:19:50.452639    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_reboot>restart</on_reboot>
	I0929 10:19:50.452649    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_crash>destroy</on_crash>
	I0929 10:19:50.452658    8330 main.go:141] libmachine: (addons-911532) DBG |   <devices>
	I0929 10:19:50.452665    8330 main.go:141] libmachine: (addons-911532) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 10:19:50.452674    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='cdrom'>
	I0929 10:19:50.452680    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw'/>
	I0929 10:19:50.452692    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.452710    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.452726    8330 main.go:141] libmachine: (addons-911532) DBG |       <readonly/>
	I0929 10:19:50.452740    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 10:19:50.452748    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452760    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='disk'>
	I0929 10:19:50.452768    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 10:19:50.452781    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.452797    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.452811    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 10:19:50.452820    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452832    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 10:19:50.452844    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 10:19:50.452853    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452868    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 10:19:50.452882    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 10:19:50.452894    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 10:19:50.452905    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452917    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452928    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:96:11:56'/>
	I0929 10:19:50.452937    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='mk-addons-911532'/>
	I0929 10:19:50.452945    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.452955    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 10:19:50.452975    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.452983    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452999    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:be:29:87'/>
	I0929 10:19:50.453014    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='default'/>
	I0929 10:19:50.453022    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.453031    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 10:19:50.453042    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.453053    8330 main.go:141] libmachine: (addons-911532) DBG |     <serial type='pty'>
	I0929 10:19:50.453062    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='isa-serial' port='0'>
	I0929 10:19:50.453073    8330 main.go:141] libmachine: (addons-911532) DBG |         <model name='isa-serial'/>
	I0929 10:19:50.453081    8330 main.go:141] libmachine: (addons-911532) DBG |       </target>
	I0929 10:19:50.453088    8330 main.go:141] libmachine: (addons-911532) DBG |     </serial>
	I0929 10:19:50.453094    8330 main.go:141] libmachine: (addons-911532) DBG |     <console type='pty'>
	I0929 10:19:50.453106    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='serial' port='0'/>
	I0929 10:19:50.453114    8330 main.go:141] libmachine: (addons-911532) DBG |     </console>
	I0929 10:19:50.453119    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='mouse' bus='ps2'/>
	I0929 10:19:50.453131    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 10:19:50.453138    8330 main.go:141] libmachine: (addons-911532) DBG |     <audio id='1' type='none'/>
	I0929 10:19:50.453144    8330 main.go:141] libmachine: (addons-911532) DBG |     <memballoon model='virtio'>
	I0929 10:19:50.453153    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 10:19:50.453158    8330 main.go:141] libmachine: (addons-911532) DBG |     </memballoon>
	I0929 10:19:50.453162    8330 main.go:141] libmachine: (addons-911532) DBG |     <rng model='virtio'>
	I0929 10:19:50.453170    8330 main.go:141] libmachine: (addons-911532) DBG |       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.453176    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 10:19:50.453193    8330 main.go:141] libmachine: (addons-911532) DBG |     </rng>
	I0929 10:19:50.453213    8330 main.go:141] libmachine: (addons-911532) DBG |   </devices>
	I0929 10:19:50.453227    8330 main.go:141] libmachine: (addons-911532) DBG | </domain>
	I0929 10:19:50.453239    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:51.804030    8330 main.go:141] libmachine: (addons-911532) waiting for domain to start...
	I0929 10:19:51.805192    8330 main.go:141] libmachine: (addons-911532) domain is now running
	I0929 10:19:51.805217    8330 main.go:141] libmachine: (addons-911532) waiting for IP...
	I0929 10:19:51.805985    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:51.806446    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:51.806469    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:51.806682    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:51.806731    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:51.806690    8358 retry.go:31] will retry after 261.427598ms: waiting for domain to come up
	I0929 10:19:52.070280    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.070742    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.070767    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.070971    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.070993    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.070958    8358 retry.go:31] will retry after 240.955253ms: waiting for domain to come up
	I0929 10:19:52.313494    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.313944    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.313967    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.314221    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.314248    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.314183    8358 retry.go:31] will retry after 448.127739ms: waiting for domain to come up
	I0929 10:19:52.763659    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.764289    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.764319    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.764571    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.764611    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.764572    8358 retry.go:31] will retry after 440.800517ms: waiting for domain to come up
	I0929 10:19:53.207391    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.207852    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.207875    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.208100    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.208135    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.208089    8358 retry.go:31] will retry after 608.456206ms: waiting for domain to come up
	I0929 10:19:53.817995    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.818510    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.818534    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.818802    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.818825    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.818782    8358 retry.go:31] will retry after 587.200151ms: waiting for domain to come up
	I0929 10:19:54.407631    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:54.408171    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:54.408193    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:54.408543    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:54.408576    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:54.408497    8358 retry.go:31] will retry after 1.130343319s: waiting for domain to come up
	I0929 10:19:55.540378    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:55.540927    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:55.540953    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:55.541189    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:55.541213    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:55.541166    8358 retry.go:31] will retry after 1.101264298s: waiting for domain to come up
	I0929 10:19:56.643818    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:56.644330    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:56.644369    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:56.644602    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:56.644625    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:56.644570    8358 retry.go:31] will retry after 1.643468675s: waiting for domain to come up
	I0929 10:19:58.290455    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:58.290889    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:58.290912    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:58.291164    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:58.291183    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:58.291128    8358 retry.go:31] will retry after 1.40280966s: waiting for domain to come up
	I0929 10:19:59.695464    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:59.695974    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:59.695992    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:59.696272    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:59.696323    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:59.696265    8358 retry.go:31] will retry after 1.862603319s: waiting for domain to come up
	I0929 10:20:01.561785    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:01.562380    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:01.562407    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:01.562655    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:01.562683    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:01.562634    8358 retry.go:31] will retry after 2.941456391s: waiting for domain to come up
	I0929 10:20:04.507942    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:04.508465    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:04.508487    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:04.508708    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:04.508754    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:04.508692    8358 retry.go:31] will retry after 3.063009242s: waiting for domain to come up
	I0929 10:20:07.575419    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.575975    8330 main.go:141] libmachine: (addons-911532) found domain IP: 192.168.39.179
	I0929 10:20:07.575990    8330 main.go:141] libmachine: (addons-911532) reserving static IP address...
	I0929 10:20:07.575998    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has current primary IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.576366    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find host DHCP lease matching {name: "addons-911532", mac: "52:54:00:96:11:56", ip: "192.168.39.179"} in network mk-addons-911532
	I0929 10:20:07.774232    8330 main.go:141] libmachine: (addons-911532) DBG | Getting to WaitForSSH function...
	I0929 10:20:07.774263    8330 main.go:141] libmachine: (addons-911532) reserved static IP address 192.168.39.179 for domain addons-911532
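The retry lines above are the driver polling libvirt for the guest's DHCP lease, first from the lease table and then via ARP, backing off between attempts until 192.168.39.179 appears for MAC 52:54:00:96:11:56. A hypothetical sketch of the same wait using virsh net-dhcp-leases, with a fixed poll interval instead of the growing backoff seen in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLeaseIP polls `virsh net-dhcp-leases` until a lease for the given MAC
// appears, loosely mirroring the retry loop in the log above.
func waitForLeaseIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "-c", "qemu:///system",
			"net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// Lease lines look roughly like:
				//   2025-09-29 11:20:06  52:54:00:96:11:56  ipv4  192.168.39.179/24  minikube  ...
				for _, field := range strings.Fields(line) {
					if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
						return strings.SplitN(field, "/", 2)[0], nil
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s within %s", mac, network, timeout)
}

func main() {
	ip, err := waitForLeaseIP("mk-addons-911532", "52:54:00:96:11:56", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("guest IP:", ip)
}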
	I0929 10:20:07.774309    8330 main.go:141] libmachine: (addons-911532) waiting for SSH...
	I0929 10:20:07.777412    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.777949    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.777974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.778160    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH client type: external
	I0929 10:20:07.778178    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa (-rw-------)
	I0929 10:20:07.778240    8330 main.go:141] libmachine: (addons-911532) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 10:20:07.778264    8330 main.go:141] libmachine: (addons-911532) DBG | About to run SSH command:
	I0929 10:20:07.778276    8330 main.go:141] libmachine: (addons-911532) DBG | exit 0
	I0929 10:20:07.917138    8330 main.go:141] libmachine: (addons-911532) DBG | SSH cmd err, output: <nil>: 
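The "waiting for SSH" step is satisfied by running exit 0 over SSH with the exact client options printed above, repeating until the command succeeds. A sketch of that readiness probe via os/exec, with the option list copied from the log and the retry policy simplified; the key path in main is a placeholder for the machine's generated id_rsa:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with the same client options that appear
// in the log above; it returns once the command succeeds or attempts run out.
func sshReady(ip, keyPath string, attempts int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh never became ready on %s: %w", ip, err)
}

func main() {
	if err := sshReady("192.168.39.179", "/path/to/id_rsa", 20); err != nil {
		panic(err)
	}
	fmt.Println("SSH is up")
}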
	I0929 10:20:07.917411    8330 main.go:141] libmachine: (addons-911532) domain creation complete
	I0929 10:20:07.917792    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:07.918434    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918664    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918846    8330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 10:20:07.918860    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:07.920305    8330 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 10:20:07.920320    8330 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 10:20:07.920325    8330 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 10:20:07.920330    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:07.922896    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923256    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.923281    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923438    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:07.923635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923781    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923951    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:07.924122    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:07.924327    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:07.924337    8330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 10:20:08.032128    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.032150    8330 main.go:141] libmachine: Detecting the provisioner...
	I0929 10:20:08.032158    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.035150    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035650    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.035676    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035849    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.036023    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036162    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036310    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.036503    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.036699    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.036709    8330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 10:20:08.146139    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 10:20:08.146218    8330 main.go:141] libmachine: found compatible host: buildroot
	I0929 10:20:08.146225    8330 main.go:141] libmachine: Provisioning with buildroot...
	I0929 10:20:08.146232    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146517    8330 buildroot.go:166] provisioning hostname "addons-911532"
	I0929 10:20:08.146546    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146724    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.149534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.149903    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.149931    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.150079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.150261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150452    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.150709    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.150906    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.150918    8330 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-911532 && echo "addons-911532" | sudo tee /etc/hostname
	I0929 10:20:08.278974    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-911532
	
	I0929 10:20:08.279001    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.282211    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282657    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.282689    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282950    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.283137    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283318    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283463    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.283602    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.283817    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.283855    8330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-911532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-911532/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-911532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:20:08.400849    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.400874    8330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 10:20:08.400909    8330 buildroot.go:174] setting up certificates
	I0929 10:20:08.400922    8330 provision.go:84] configureAuth start
	I0929 10:20:08.400933    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.401221    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.404488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.404861    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.404881    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.405105    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.407451    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.407783    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.407808    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.408007    8330 provision.go:143] copyHostCerts
	I0929 10:20:08.408072    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 10:20:08.408347    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 10:20:08.408478    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 10:20:08.408562    8330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.addons-911532 san=[127.0.0.1 192.168.39.179 addons-911532 localhost minikube]
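configureAuth issues a server certificate signed by the local CA, with the SANs listed in the line above (127.0.0.1, 192.168.39.179, addons-911532, localhost, minikube). A minimal sketch of issuing such a certificate with crypto/x509, assuming a PKCS#1 RSA CA key on disk; file names, validity, and key-usage flags here are illustrative, and minikube's own implementation differs in detail:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair (placeholder paths standing in for ca.pem / ca-key.pem).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	keyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("failed to decode CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	must(err)

	// Fresh key pair for the server certificate.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	// SANs copied from the provisioning log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-911532"}},
		DNSNames:     []string{"addons-911532", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.179")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)

	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600))
}

The resulting server.pem / server-key.pem pair is what the copyRemoteCerts step below pushes to /etc/docker on the guest.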
	I0929 10:20:08.457469    8330 provision.go:177] copyRemoteCerts
	I0929 10:20:08.457527    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:20:08.457548    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.460625    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.460962    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.460991    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.461153    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.461390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.461509    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.461643    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.546790    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:20:08.577312    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:20:08.607181    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
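Note: copyRemoteCerts places ca.pem, server.pem and server-key.pem under /etc/docker inside the guest. A quick, hedged spot-check over SSH (profile name taken from this run):

    minikube ssh -p addons-911532 -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem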
	I0929 10:20:08.636055    8330 provision.go:87] duration metric: took 235.1207ms to configureAuth
	I0929 10:20:08.636085    8330 buildroot.go:189] setting minikube options for container-runtime
	I0929 10:20:08.636280    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:08.636388    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.639147    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639482    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.639525    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639765    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.639937    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640129    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640246    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.640408    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.640614    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.640629    8330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:20:08.884944    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
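Note: the SSH command above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' and restarts cri-o so the service CIDR is treated as an insecure registry range. A hedged way to confirm the drop-in and that cri-o came back up:

    minikube ssh -p addons-911532 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p addons-911532 -- sudo systemctl is-active crio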
	I0929 10:20:08.884967    8330 main.go:141] libmachine: Checking connection to Docker...
	I0929 10:20:08.884977    8330 main.go:141] libmachine: (addons-911532) Calling .GetURL
	I0929 10:20:08.886395    8330 main.go:141] libmachine: (addons-911532) DBG | using libvirt version 8000000
	I0929 10:20:08.888906    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889281    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.889309    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889489    8330 main.go:141] libmachine: Docker is up and running!
	I0929 10:20:08.889503    8330 main.go:141] libmachine: Reticulating splines...
	I0929 10:20:08.889509    8330 client.go:171] duration metric: took 19.140044962s to LocalClient.Create
	I0929 10:20:08.889527    8330 start.go:167] duration metric: took 19.140101533s to libmachine.API.Create "addons-911532"
	I0929 10:20:08.889535    8330 start.go:293] postStartSetup for "addons-911532" (driver="kvm2")
	I0929 10:20:08.889546    8330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:20:08.889561    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:08.889787    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:20:08.889810    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.893400    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893828    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.893850    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893987    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.894222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.894407    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.894549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.979409    8330 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:20:08.984274    8330 info.go:137] Remote host: Buildroot 2025.02
	I0929 10:20:08.984296    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 10:20:08.984377    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 10:20:08.984400    8330 start.go:296] duration metric: took 94.85978ms for postStartSetup
	I0929 10:20:08.984429    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:08.985063    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.987970    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988332    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.988371    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988631    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:20:08.988817    8330 start.go:128] duration metric: took 19.255225953s to createHost
	I0929 10:20:08.988846    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.991306    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.991862    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.991889    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.992056    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.992222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992394    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992520    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.992681    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.992946    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.992962    8330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 10:20:09.100129    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759141209.059279000
	
	I0929 10:20:09.100152    8330 fix.go:216] guest clock: 1759141209.059279000
	I0929 10:20:09.100159    8330 fix.go:229] Guest: 2025-09-29 10:20:09.059279 +0000 UTC Remote: 2025-09-29 10:20:08.988831556 +0000 UTC m=+19.364626106 (delta=70.447444ms)
	I0929 10:20:09.100191    8330 fix.go:200] guest clock delta is within tolerance: 70.447444ms
	I0929 10:20:09.100196    8330 start.go:83] releasing machines lock for "addons-911532", held for 19.366681656s
	I0929 10:20:09.100216    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.100557    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:09.103690    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104033    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.104062    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104246    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104743    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104923    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.105046    8330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:20:09.105097    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.105112    8330 ssh_runner.go:195] Run: cat /version.json
	I0929 10:20:09.105130    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.108069    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108119    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108464    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108512    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108734    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108749    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108912    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.108926    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.109101    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109113    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109256    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.109260    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.216417    8330 ssh_runner.go:195] Run: systemctl --version
	I0929 10:20:09.222846    8330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:20:09.384636    8330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 10:20:09.391852    8330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 10:20:09.391906    8330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:09.412791    8330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 10:20:09.412813    8330 start.go:495] detecting cgroup driver to use...
	I0929 10:20:09.412882    8330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:20:09.432417    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:20:09.448433    8330 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:20:09.448494    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:20:09.465964    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:20:09.481975    8330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:20:09.629225    8330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:20:09.840833    8330 docker.go:234] disabling docker service ...
	I0929 10:20:09.840898    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:20:09.858103    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:20:09.872733    8330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:20:10.028160    8330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:20:10.170725    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:20:10.186498    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:20:10.208790    8330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:20:10.208840    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.221373    8330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:20:10.221427    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.233339    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.245762    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.257848    8330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:20:10.270858    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.283122    8330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.304068    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
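Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). A hedged spot-check of the result; the expected values are read off the commands in this log, not dumped from the actual file:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",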
	I0929 10:20:10.316039    8330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:20:10.326321    8330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:20:10.326388    8330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:20:10.348550    8330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
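Note: the sysctl probe fails until br_netfilter is loaded, which is why modprobe runs next; IPv4 forwarding is then enabled directly via /proc. A hedged re-check of the resulting kernel state on the guest:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # only available once br_netfilter is loaded
    sysctl net.ipv4.ip_forward                  # set to 1 by the echo above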
	I0929 10:20:10.361988    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:10.507746    8330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:20:10.612811    8330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:20:10.612899    8330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:20:10.618569    8330 start.go:563] Will wait 60s for crictl version
	I0929 10:20:10.618625    8330 ssh_runner.go:195] Run: which crictl
	I0929 10:20:10.622944    8330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:20:10.665514    8330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 10:20:10.665614    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.694916    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.724814    8330 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 10:20:10.726157    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:10.729133    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729545    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:10.729575    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729788    8330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 10:20:10.734601    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:10.750745    8330 kubeadm.go:875] updating cluster {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:20:10.750830    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:10.750873    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:10.786965    8330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 10:20:10.787034    8330 ssh_runner.go:195] Run: which lz4
	I0929 10:20:10.791694    8330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 10:20:10.796598    8330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 10:20:10.796640    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 10:20:12.287040    8330 crio.go:462] duration metric: took 1.495381435s to copy over tarball
	I0929 10:20:12.287115    8330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 10:20:13.904851    8330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.617709548s)
	I0929 10:20:13.904878    8330 crio.go:469] duration metric: took 1.617810623s to extract the tarball
	I0929 10:20:13.904887    8330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 10:20:13.946333    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:13.991640    8330 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:13.991663    8330 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:20:13.991671    8330 kubeadm.go:926] updating node { 192.168.39.179 8443 v1.34.0 crio true true} ...
	I0929 10:20:13.991761    8330 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-911532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
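Note: the kubelet unit drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (313 bytes). A hedged way to view the merged unit systemd will actually run:

    minikube ssh -p addons-911532 -- systemctl cat kubelet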
	I0929 10:20:13.991839    8330 ssh_runner.go:195] Run: crio config
	I0929 10:20:14.038150    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:14.038169    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:14.038180    8330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:20:14.038198    8330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-911532 NodeName:addons-911532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:20:14.038300    8330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-911532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
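Note: the kubeadm config printed above is copied to /var/tmp/minikube/kubeadm.yaml.new below (2216 bytes) and promoted to kubeadm.yaml before init. A hedged way to inspect and sanity-check it on the node (the `kubeadm config validate` subcommand exists in recent kubeadm releases, but treat the exact invocation as an assumption):

    minikube ssh -p addons-911532 -- sudo cat /var/tmp/minikube/kubeadm.yaml
    minikube ssh -p addons-911532 -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml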
	I0929 10:20:14.038381    8330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:20:14.053651    8330 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:20:14.053724    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:20:14.068031    8330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 10:20:14.092020    8330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:20:14.116202    8330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0929 10:20:14.140056    8330 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0929 10:20:14.144733    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
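Note: together with the earlier host.minikube.internal rewrite, the guest's /etc/hosts now carries entries equivalent to:

    192.168.39.1     host.minikube.internal
    192.168.39.179   control-plane.minikube.internal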
	I0929 10:20:14.159800    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:14.314527    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:14.337683    8330 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532 for IP: 192.168.39.179
	I0929 10:20:14.337707    8330 certs.go:194] generating shared ca certs ...
	I0929 10:20:14.337743    8330 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.337913    8330 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 10:20:14.828624    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt ...
	I0929 10:20:14.828656    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt: {Name:mk605d19c615ec63bb49553d32d16a9968996447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828869    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key ...
	I0929 10:20:14.828887    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key: {Name:mk116fbaf9146e252d64c98b19fb4d5d877a65f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828995    8330 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 10:20:15.061750    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt ...
	I0929 10:20:15.061779    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt: {Name:mk3eeeaec93a3e580abc1a0f8721c39cfd08ef60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.061960    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key ...
	I0929 10:20:15.061975    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key: {Name:mkc397709470903133ba0b5a62b9ca66bd0144de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.062076    8330 certs.go:256] generating profile certs ...
	I0929 10:20:15.062154    8330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key
	I0929 10:20:15.062173    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt with IP's: []
	I0929 10:20:15.253281    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt ...
	I0929 10:20:15.253313    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: {Name:mkb6d93d9208f1e65858ef821a0bf2997c10f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253506    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key ...
	I0929 10:20:15.253523    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key: {Name:mk3162bfdf768dab29342cf9830ff9fd4702cb96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253628    8330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f
	I0929 10:20:15.253656    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.179]
	I0929 10:20:15.479023    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f ...
	I0929 10:20:15.479053    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f: {Name:mkae8e94bfacd54df10c2599ebed7801d300337d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479223    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f ...
	I0929 10:20:15.479241    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f: {Name:mk28de5248c1f787c9e307292da7671529b3c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479345    8330 certs.go:381] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt
	I0929 10:20:15.479457    8330 certs.go:385] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key
	I0929 10:20:15.479530    8330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key
	I0929 10:20:15.479554    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt with IP's: []
	I0929 10:20:15.890186    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt ...
	I0929 10:20:15.890217    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt: {Name:mk8d6457a0876ed0180e350f3cff3f286feaeb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890408    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key ...
	I0929 10:20:15.890424    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key: {Name:mk5fa1c5bb7ab27f1723ebd353f821745dcf151a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890613    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:20:15.890663    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:20:15.890698    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:20:15.890741    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem (1679 bytes)
	I0929 10:20:15.891316    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:20:15.938903    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:20:15.978982    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:20:16.009727    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:20:16.039344    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:20:16.070479    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:20:16.101539    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:20:16.131091    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:20:16.161171    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:20:16.190550    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:20:16.210923    8330 ssh_runner.go:195] Run: openssl version
	I0929 10:20:16.217450    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:20:16.231199    8330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236531    8330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236589    8330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.244248    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
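Note: the symlink name b5213941.0 is the OpenSSL subject-hash of minikubeCA.pem, which is how TLS clients on the guest locate the cluster CA in /etc/ssl/certs. A hedged re-derivation on the guest:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem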
	I0929 10:20:16.258217    8330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:20:16.263250    8330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:20:16.263302    8330 kubeadm.go:392] StartCluster: {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:16.263401    8330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:20:16.263469    8330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:20:16.311031    8330 cri.go:89] found id: ""
	I0929 10:20:16.311136    8330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:20:16.324180    8330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:20:16.335996    8330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:20:16.348491    8330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:20:16.348510    8330 kubeadm.go:157] found existing configuration files:
	
	I0929 10:20:16.348558    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:20:16.359693    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:20:16.359754    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:20:16.371848    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:20:16.382965    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:20:16.383055    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:20:16.395004    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.405764    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:20:16.405833    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.417554    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:20:16.428340    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:20:16.428405    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:20:16.439786    8330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 10:20:16.601410    8330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:20:29.233520    8330 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:20:29.233611    8330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:20:29.233698    8330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:20:29.233818    8330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:20:29.233926    8330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:20:29.233987    8330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:20:29.236675    8330 out.go:252]   - Generating certificates and keys ...
	I0929 10:20:29.236749    8330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:20:29.236804    8330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:20:29.236891    8330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:20:29.236989    8330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:20:29.237083    8330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:20:29.237156    8330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:20:29.237245    8330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:20:29.237406    8330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237472    8330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:20:29.237610    8330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237672    8330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:20:29.237726    8330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:20:29.237792    8330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:20:29.237868    8330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:20:29.237928    8330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:20:29.237983    8330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:20:29.238037    8330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:20:29.238094    8330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:20:29.238141    8330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:20:29.238212    8330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:20:29.238272    8330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:20:29.239488    8330 out.go:252]   - Booting up control plane ...
	I0929 10:20:29.239556    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:20:29.239621    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:20:29.239677    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:20:29.239796    8330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:20:29.239908    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:20:29.240017    8330 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:20:29.240091    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:20:29.240132    8330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:20:29.240245    8330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:20:29.240338    8330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:20:29.240414    8330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500993452s
	I0929 10:20:29.240491    8330 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:20:29.240576    8330 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.179:8443/livez
	I0929 10:20:29.240647    8330 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:20:29.240713    8330 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:20:29.240773    8330 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.605979769s
	I0929 10:20:29.240827    8330 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.265600399s
	I0929 10:20:29.240895    8330 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001411979s
	I0929 10:20:29.241002    8330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:20:29.241131    8330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:20:29.241217    8330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:20:29.241415    8330 kubeadm.go:310] [mark-control-plane] Marking the node addons-911532 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:20:29.241473    8330 kubeadm.go:310] [bootstrap-token] Using token: xpmnvs.em3s359nhdig9yyg
	I0929 10:20:29.243962    8330 out.go:252]   - Configuring RBAC rules ...
	I0929 10:20:29.244057    8330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:20:29.244129    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:20:29.244271    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:20:29.244454    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:20:29.244608    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:20:29.244721    8330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:20:29.244831    8330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:20:29.244870    8330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:20:29.244921    8330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:20:29.244927    8330 kubeadm.go:310] 
	I0929 10:20:29.244982    8330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:20:29.244987    8330 kubeadm.go:310] 
	I0929 10:20:29.245051    8330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:20:29.245057    8330 kubeadm.go:310] 
	I0929 10:20:29.245078    8330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:20:29.245167    8330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:20:29.245249    8330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:20:29.245259    8330 kubeadm.go:310] 
	I0929 10:20:29.245332    8330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:20:29.245343    8330 kubeadm.go:310] 
	I0929 10:20:29.245425    8330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:20:29.245437    8330 kubeadm.go:310] 
	I0929 10:20:29.245517    8330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:20:29.245623    8330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:20:29.245684    8330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:20:29.245691    8330 kubeadm.go:310] 
	I0929 10:20:29.245784    8330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:20:29.245882    8330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:20:29.245889    8330 kubeadm.go:310] 
	I0929 10:20:29.245989    8330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246119    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 \
	I0929 10:20:29.246143    8330 kubeadm.go:310] 	--control-plane 
	I0929 10:20:29.246149    8330 kubeadm.go:310] 
	I0929 10:20:29.246228    8330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:20:29.246239    8330 kubeadm.go:310] 
	I0929 10:20:29.246310    8330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246451    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 
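
The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo). A minimal Go sketch that reproduces that hash from the node's CA certificate is shown below; the path assumes the kubeadm default /etc/kubernetes/pki/ca.crt.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed default location of the cluster CA certificate on the control plane.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
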
	I0929 10:20:29.246468    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:29.246477    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:29.248668    8330 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:20:29.249832    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:20:29.264165    8330 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
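
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI conflist. Its exact contents are not shown in this log; the Go sketch below writes a generic conflist of roughly that shape to a scratch path, where the cniVersion, bridge name, and pod subnet are assumptions rather than minikube's actual values.

package main

import "os"

// Illustrative only: a generic bridge + portmap conflist of the kind installed
// above. Field values are assumptions, not the exact file minikube copies.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Write to a local scratch file rather than /etc/cni/net.d to keep the sketch harmless.
	if err := os.WriteFile("1-k8s.conflist.example", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
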
	I0929 10:20:29.287307    8330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:20:29.287371    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.287441    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-911532 minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=addons-911532 minikube.k8s.io/primary=true
	I0929 10:20:29.333982    8330 ops.go:34] apiserver oom_adj: -16
	I0929 10:20:29.443148    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.943547    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.443943    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.944035    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.443398    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.943338    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.443329    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.944216    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.443626    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.943212    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.443454    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.577904    8330 kubeadm.go:1105] duration metric: took 5.290578825s to wait for elevateKubeSystemPrivileges
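
The repeated "kubectl get sa default" runs above are minikube polling at roughly 500ms intervals until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration measures. A simplified Go sketch of that loop is shown below; it shells out to kubectl the same way, but the binary and kubeconfig paths are placeholders rather than minikube's exact invocation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll for the default ServiceAccount by re-running `kubectl get sa default`
// every 500ms until it succeeds or a deadline passes.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
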
	I0929 10:20:34.577946    8330 kubeadm.go:394] duration metric: took 18.314646355s to StartCluster
	I0929 10:20:34.577972    8330 settings.go:142] acquiring lock: {Name:mkbd44ffc9a24198fd299896a4cba1c423a77e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578089    8330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:20:34.578570    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/kubeconfig: {Name:mka4c30ad2429731194076d58cd88072dc744e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578797    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:20:34.578808    8330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:20:34.578883    8330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:20:34.578998    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.579013    8330 addons.go:69] Setting metrics-server=true in profile "addons-911532"
	I0929 10:20:34.579019    8330 addons.go:69] Setting inspektor-gadget=true in profile "addons-911532"
	I0929 10:20:34.579032    8330 addons.go:238] Setting addon metrics-server=true in "addons-911532"
	I0929 10:20:34.579001    8330 addons.go:69] Setting yakd=true in profile "addons-911532"
	I0929 10:20:34.579051    8330 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579058    8330 addons.go:69] Setting registry=true in profile "addons-911532"
	I0929 10:20:34.579072    8330 addons.go:69] Setting registry-creds=true in profile "addons-911532"
	I0929 10:20:34.579076    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579083    8330 addons.go:69] Setting ingress=true in profile "addons-911532"
	I0929 10:20:34.579081    8330 addons.go:69] Setting cloud-spanner=true in profile "addons-911532"
	I0929 10:20:34.579094    8330 addons.go:238] Setting addon ingress=true in "addons-911532"
	I0929 10:20:34.579096    8330 addons.go:238] Setting addon registry=true in "addons-911532"
	I0929 10:20:34.579103    8330 addons.go:238] Setting addon cloud-spanner=true in "addons-911532"
	I0929 10:20:34.579073    8330 addons.go:69] Setting default-storageclass=true in profile "addons-911532"
	I0929 10:20:34.579122    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579121    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-911532"
	I0929 10:20:34.579135    8330 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-911532"
	I0929 10:20:34.579139    8330 addons.go:69] Setting ingress-dns=true in profile "addons-911532"
	I0929 10:20:34.579153    8330 addons.go:238] Setting addon ingress-dns=true in "addons-911532"
	I0929 10:20:34.579163    8330 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:34.579173    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579182    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579066    8330 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-911532"
	I0929 10:20:34.579422    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579042    8330 addons.go:69] Setting storage-provisioner=true in profile "addons-911532"
	I0929 10:20:34.579481    8330 addons.go:238] Setting addon storage-provisioner=true in "addons-911532"
	I0929 10:20:34.579516    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579556    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579584    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579596    8330 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-911532"
	I0929 10:20:34.579608    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-911532"
	I0929 10:20:34.579617    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579621    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579642    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579645    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579680    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579042    8330 addons.go:238] Setting addon inspektor-gadget=true in "addons-911532"
	I0929 10:20:34.579704    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579864    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579866    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579902    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579927    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579956    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579976    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580024    8330 addons.go:69] Setting volcano=true in profile "addons-911532"
	I0929 10:20:34.579130    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580036    8330 addons.go:238] Setting addon volcano=true in "addons-911532"
	I0929 10:20:34.580046    8330 addons.go:69] Setting volumesnapshots=true in profile "addons-911532"
	I0929 10:20:34.580056    8330 addons.go:238] Setting addon volumesnapshots=true in "addons-911532"
	I0929 10:20:34.579063    8330 addons.go:238] Setting addon yakd=true in "addons-911532"
	I0929 10:20:34.579586    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580102    8330 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579127    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580205    8330 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-911532"
	I0929 10:20:34.579104    8330 addons.go:238] Setting addon registry-creds=true in "addons-911532"
	I0929 10:20:34.580465    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580663    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580700    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579074    8330 addons.go:69] Setting gcp-auth=true in profile "addons-911532"
	I0929 10:20:34.580761    8330 mustload.go:65] Loading cluster: addons-911532
	I0929 10:20:34.580485    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580518    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.581600    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.581630    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580542    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580556    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.582054    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582079    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582213    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582242    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582457    8330 out.go:179] * Verifying Kubernetes components...
	I0929 10:20:34.580566    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580580    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580599    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582793    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.584547    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.584595    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.586549    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:34.587657    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.587871    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.587947    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.588033    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.588105    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.589680    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.589749    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.611209    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0929 10:20:34.619982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0929 10:20:34.620045    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0929 10:20:34.620051    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0929 10:20:34.619982    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620679    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620992    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.621009    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.621801    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.621914    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0929 10:20:34.621956    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0929 10:20:34.622631    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.622650    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.623029    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623106    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0929 10:20:34.623707    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623823    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623840    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623952    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.623963    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624510    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.624527    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624583    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.624625    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.625263    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.625789    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.625829    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.626174    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.626661    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.626678    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627096    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627432    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627595    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627607    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627652    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.627682    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.627733    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627744    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.628166    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628190    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628220    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.628314    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628759    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628788    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.629020    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.629055    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.631879    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0929 10:20:34.632376    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.632705    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0929 10:20:34.633030    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.633048    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.633193    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.633230    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.633267    8330 addons.go:238] Setting addon default-storageclass=true in "addons-911532"
	I0929 10:20:34.633652    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.633800    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.634170    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.634207    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.635813    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.635852    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.636152    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.636325    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0929 10:20:34.636872    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.637313    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.637328    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642530    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.642548    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642626    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.642679    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0929 10:20:34.643872    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.644142    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.644246    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.644288    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.645594    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0929 10:20:34.648922    8330 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-911532"
	I0929 10:20:34.649021    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.649433    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.649468    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.648943    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.652866    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.653073    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.653088    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.653480    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0929 10:20:34.653596    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.654397    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.654434    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.654714    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0929 10:20:34.654720    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.654766    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.654784    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655230    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.655412    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.655448    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655888    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.655923    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.656194    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.656228    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.656428    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.657115    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.657140    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.660741    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.661324    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.661373    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.664929    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.665442    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0929 10:20:34.665663    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I0929 10:20:34.666958    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.666976    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.667484    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.667511    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.667663    8330 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:20:34.668039    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.668186    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.668825    8330 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:20:34.668844    8330 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:20:34.668864    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.670363    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0929 10:20:34.670492    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0929 10:20:34.670589    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670638    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670685    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0929 10:20:34.670850    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.671069    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.673465    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0929 10:20:34.673527    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.674063    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.674096    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.674977    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0929 10:20:34.675676    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.676230    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.676248    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.676307    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.676719    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.677275    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.677317    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.677523    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.678840    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678928    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678990    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.679041    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.679058    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.679469    8330 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:20:34.680842    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:34.680869    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:20:34.680887    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.682698    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.682719    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0929 10:20:34.682798    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682814    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682799    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682873    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682971    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.683566    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683632    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683639    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683654    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683726    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.683774    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683785    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683941    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.684015    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.684089    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.684161    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684441    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.684455    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.684741    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.684802    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684849    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684894    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0929 10:20:34.685225    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685265    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685603    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685635    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685757    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.686288    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.686328    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.687002    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.687029    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.690223    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.693652    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.693704    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.698952    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I0929 10:20:34.698970    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.699009    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.699052    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.699072    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.698956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.699670    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.699705    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.700063    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.700153    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700166    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700208    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700218    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700345    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.700526    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.701231    8330 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:20:34.701911    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.701977    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.702057    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:20:34.702426    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702172    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702205    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.702855    8330 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:34.703378    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:20:34.703399    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.704803    8330 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:20:34.704895    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:20:34.705477    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0929 10:20:34.705978    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:20:34.705994    8330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:20:34.706011    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.708737    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:20:34.709962    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:20:34.711332    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:20:34.711651    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.711697    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0929 10:20:34.711872    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I0929 10:20:34.711919    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712201    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712421    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712506    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.712521    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.712998    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.713202    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.713218    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.713266    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.713854    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0929 10:20:34.713974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.714080    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.714091    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.714089    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:20:34.714230    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.715079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.715142    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715220    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.715297    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715368    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.715956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716009    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.716125    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.716175    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0929 10:20:34.716205    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.716294    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.716343    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 10:20:34.716378    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.716486    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.716488    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716500    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:20:34.716534    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716848    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716857    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.717024    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.717298    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.717928    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.718122    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0929 10:20:34.718584    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.718977    8330 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:20:34.719471    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719488    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719792    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719808    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719952    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:20:34.720195    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.719597    8330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:20:34.720392    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.720598    8330 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:34.720616    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:20:34.720632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.720636    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.720067    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.720145    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.720684    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721147    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.721261    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:20:34.721272    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:20:34.721286    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721295    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.721304    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721329    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:34.721337    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.721343    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:20:34.721370    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721378    8330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:34.721386    8330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:20:34.721397    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.722081    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722147    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722188    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722501    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722717    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.722815    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I0929 10:20:34.723931    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.724477    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.724627    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0929 10:20:34.724682    8330 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:20:34.725137    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0929 10:20:34.725214    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725408    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.725474    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.725712    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725963    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.725985    8330 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:20:34.726200    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726227    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726409    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726429    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726650    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.726822    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727082    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727129    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.727533    8330 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:20:34.727533    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:20:34.727652    8330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:20:34.727676    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.728686    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.728766    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.729230    8330 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:20:34.729245    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:20:34.729261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.730397    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.730781    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.731393    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.731820    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.732216    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:20:34.732339    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.732658    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.732406    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732428    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.732749    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732857    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:20:34.733003    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:34.733015    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.733085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733094    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:34.733106    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.733113    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.733174    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.733327    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.733400    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733408    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:20:34.733499    8330 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:20:34.733798    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.733801    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.733912    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.734054    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.734644    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734341    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734688    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734709    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734754    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:20:34.734762    8330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:20:34.734774    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.735299    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.735491    8330 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:20:34.735536    8330 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:20:34.735635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.735893    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.736417    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.736504    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736524    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736551    8330 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:34.736683    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736733    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:20:34.736746    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736864    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737094    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.737173    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737498    8330 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:34.737512    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:20:34.737529    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.738046    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.738231    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.738250    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.739179    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:34.739195    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:20:34.739209    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.739655    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.740103    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740604    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.740967    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740970    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.741030    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.741379    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.741614    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.741632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.741788    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742109    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.742129    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.742150    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742161    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742421    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.742535    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.742802    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742930    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.743127    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743456    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743674    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.743699    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743973    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744132    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744133    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.744187    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744304    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.744456    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744462    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.744601    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744706    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744809    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.745109    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.745491    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.745518    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.745796    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.745998    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.746170    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.746303    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.746890    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747330    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.747407    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.747612    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0929 10:20:34.747719    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.747882    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.747955    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.748060    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.748397    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.748421    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.748773    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.749012    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.750457    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.752008    8330 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:20:34.753202    8330 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:20:34.754342    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:34.754377    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:20:34.754395    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.757852    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758255    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.758330    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758551    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.758744    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.758881    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.759050    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	W0929 10:20:35.042687    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.042728    8330 retry.go:31] will retry after 227.252154ms: ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	W0929 10:20:35.046188    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.046216    8330 retry.go:31] will retry after 146.732464ms: ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.540872    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:35.579899    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:35.660053    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:20:35.660086    8330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:20:35.675711    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:35.683986    8330 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:35.684010    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:20:35.740542    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:20:35.740565    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:20:35.747876    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:35.761273    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:20:35.761301    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:20:35.864047    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:35.966173    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.387341194s)
	I0929 10:20:35.966224    8330 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.379651941s)
	I0929 10:20:35.966281    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:35.966363    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:20:35.991879    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:36.019637    8330 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:20:36.019659    8330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:20:36.122486    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:36.211453    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:20:36.211479    8330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:20:36.220363    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:36.238690    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:36.284452    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:36.301479    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:20:36.301501    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:20:36.312324    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:20:36.312347    8330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:20:36.401460    8330 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.401485    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:20:36.408098    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:20:36.408119    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:20:36.602526    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:20:36.602552    8330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:20:36.629597    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:36.629620    8330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:20:36.659489    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:20:36.659518    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:20:36.760787    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:20:36.760817    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:20:36.780734    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.980282    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:36.980312    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:20:37.019180    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:20:37.019209    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:20:37.067476    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:37.210287    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:20:37.210314    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:20:37.370170    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:20:37.370205    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:20:37.411611    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:37.615958    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:20:37.615977    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:20:37.626251    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:20:37.626289    8330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:20:37.851163    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.310253621s)
	I0929 10:20:37.851224    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851237    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851589    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851612    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:37.851627    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851636    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851934    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:37.851969    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851975    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:38.121335    8330 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.121366    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:20:38.153983    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:20:38.154019    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:20:38.462249    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.490038    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:20:38.490067    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:20:38.882899    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:20:38.882924    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:20:39.175979    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:39.176000    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:20:39.522531    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:40.536771    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.956838267s)
	I0929 10:20:40.536814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536829    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.536835    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.861093026s)
	I0929 10:20:40.536874    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536892    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537112    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537122    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537133    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537139    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537144    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537149    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537151    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537158    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.539079    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539093    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539101    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.539082    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539102    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.645111    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.645134    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.645420    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.645437    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794330    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.046421969s)
	I0929 10:20:40.794394    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794407    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794407    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.93033074s)
	I0929 10:20:40.794439    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794453    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794500    8330 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.828203764s)
	I0929 10:20:40.794545    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.828162665s)
	I0929 10:20:40.794560    8330 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
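For reference, the sed pipeline whose completion is logged just above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1). Reconstructed from that sed expression (not copied from the cluster), the injected Corefile fragment should look roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf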
	I0929 10:20:40.794605    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.80268956s)
	I0929 10:20:40.794635    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794647    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794795    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794805    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794820    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794832    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794834    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794845    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794854    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794862    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794873    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794895    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794902    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794910    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794917    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794917    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.672242746s)
	I0929 10:20:40.794943    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794952    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795217    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795243    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795265    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795271    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795395    8330 node_ready.go:35] waiting up to 6m0s for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.795495    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795525    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795533    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795542    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.795549    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795622    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795630    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795919    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795972    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.797514    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.797521    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.797532    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.815143    8330 node_ready.go:49] node "addons-911532" is "Ready"
	I0929 10:20:40.815165    8330 node_ready.go:38] duration metric: took 19.750953ms for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.815177    8330 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:20:40.815221    8330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:20:41.364748    8330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-911532" context rescaled to 1 replicas
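The rescale logged here drops CoreDNS to one replica for this single-node cluster; as a hedged illustration (not the exact call minikube makes internally), the equivalent manual operation would be:

	kubectl --context addons-911532 -n kube-system scale deployment coredns --replicas=1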
	I0929 10:20:42.085122    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.864720869s)
	I0929 10:20:42.085215    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085224    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085491    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085509    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085519    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085526    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085859    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085876    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085859    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:42.176567    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.937842433s)
	W0929 10:20:42.176609    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:42.176627    8330 retry.go:31] will retry after 344.433489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
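The validation error above means kubectl sees no top-level apiVersion or kind in ig-crd.yaml, while the other gadget resources in the same apply are created normally; the later `apply --force` run in this log re-submits the same files. As a hedged aside, this class of error can usually be surfaced without touching the cluster by a client-side dry run of the suspect manifest, e.g.:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml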
	I0929 10:20:42.229614    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:20:42.229647    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.233209    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.233765    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.233790    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.234014    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.234217    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.234390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.234549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:42.363888    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.363918    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.364176    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.364191    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.402322    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:20:42.497253    8330 addons.go:238] Setting addon gcp-auth=true in "addons-911532"
	I0929 10:20:42.497305    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:42.497617    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.497656    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.511982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0929 10:20:42.512604    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.513162    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.513187    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.513517    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.514096    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.514143    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.521475    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:42.527839    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0929 10:20:42.528255    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.528790    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.528815    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.529201    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.529440    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:42.531322    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:42.531562    8330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:20:42.531583    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.534916    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535403    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.535429    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535641    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.535801    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.535982    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.536112    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:43.911194    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.130428404s)
	I0929 10:20:43.911250    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911264    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911305    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.843789347s)
	I0929 10:20:43.911370    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911417    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.626934708s)
	I0929 10:20:43.911442    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911459    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911385    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.499749199s)
	I0929 10:20:43.911505    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911516    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911518    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911520    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911526    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911535    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911543    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911569    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911624    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911642    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911716    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911726    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911755    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911766    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911777    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911784    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911789    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911796    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911805    8330 addons.go:479] Verifying addon registry=true in "addons-911532"
	I0929 10:20:43.911889    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911917    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.913725    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.913735    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.913745    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.914016    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914032    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914034    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914043    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914046    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914052    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914056    8330 addons.go:479] Verifying addon metrics-server=true in "addons-911532"
	I0929 10:20:43.914058    8330 addons.go:479] Verifying addon ingress=true in "addons-911532"
	I0929 10:20:43.914108    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914456    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914126    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.916045    8330 out.go:179] * Verifying registry addon...
	I0929 10:20:43.916966    8330 out.go:179] * Verifying ingress addon...
	I0929 10:20:43.916970    8330 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-911532 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:20:43.918685    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:20:43.919216    8330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:20:43.932029    8330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:20:43.932051    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:43.932389    8330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:20:43.932401    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.445321    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.455769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974560    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974637    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.197486    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.73519948s)
	W0929 10:20:45.197531    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:20:45.197552    8330 retry.go:31] will retry after 188.758064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
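This failure follows a common ordering pattern: the VolumeSnapshot CRDs and the VolumeSnapshotClass are applied in one batch, so the class cannot be mapped until the freshly created CRDs are established; the `apply --force` retry at 10:20:45.386 below re-submits the same set of files. A minimal sketch of how such ordering is usually avoided (not what minikube itself does here) is to wait for the CRD before applying the class:

	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml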
	I0929 10:20:45.197780    8330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.382549144s)
	I0929 10:20:45.197804    8330 api_server.go:72] duration metric: took 10.618970714s to wait for apiserver process to appear ...
	I0929 10:20:45.197812    8330 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:20:45.197833    8330 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I0929 10:20:45.197777    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.675200772s)
	I0929 10:20:45.197918    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.197936    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198196    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198209    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:45.198225    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198240    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.198251    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198499    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198512    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198521    8330 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:45.200264    8330 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:20:45.202570    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:20:45.239947    8330 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I0929 10:20:45.262006    8330 api_server.go:141] control plane version: v1.34.0
	I0929 10:20:45.262038    8330 api_server.go:131] duration metric: took 64.218943ms to wait for apiserver health ...
	I0929 10:20:45.262051    8330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:20:45.279433    8330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:20:45.279463    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.334344    8330 system_pods.go:59] 20 kube-system pods found
	I0929 10:20:45.334413    8330 system_pods.go:61] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.334425    8330 system_pods.go:61] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334435    8330 system_pods.go:61] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334444    8330 system_pods.go:61] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.334456    8330 system_pods.go:61] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.334471    8330 system_pods.go:61] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.334480    8330 system_pods.go:61] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.334486    8330 system_pods.go:61] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.334491    8330 system_pods.go:61] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.334500    8330 system_pods.go:61] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.334505    8330 system_pods.go:61] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.334513    8330 system_pods.go:61] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.334517    8330 system_pods.go:61] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.334528    8330 system_pods.go:61] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.334537    8330 system_pods.go:61] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.334549    8330 system_pods.go:61] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.334559    8330 system_pods.go:61] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.334565    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.334571    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending
	I0929 10:20:45.334578    8330 system_pods.go:61] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.334589    8330 system_pods.go:74] duration metric: took 72.532335ms to wait for pod list to return data ...
	I0929 10:20:45.334601    8330 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:20:45.386874    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
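This apply installs the external-snapshotter CRDs plus the volume-snapshot-controller deployment. A hedged way to confirm the CRDs registered, with names inferred from the file names in the command above:

	kubectl --context addons-911532 get crd volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io volumesnapshots.snapshot.storage.k8s.io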
	I0929 10:20:45.438919    8330 default_sa.go:45] found service account: "default"
	I0929 10:20:45.438959    8330 default_sa.go:55] duration metric: took 104.351561ms for default service account to be created ...
	I0929 10:20:45.438970    8330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:20:45.479205    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.479375    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.504498    8330 system_pods.go:86] 20 kube-system pods found
	I0929 10:20:45.504542    8330 system_pods.go:89] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.504556    8330 system_pods.go:89] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504572    8330 system_pods.go:89] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504584    8330 system_pods.go:89] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.504598    8330 system_pods.go:89] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.504609    8330 system_pods.go:89] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.504620    8330 system_pods.go:89] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.504627    8330 system_pods.go:89] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.504638    8330 system_pods.go:89] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.504647    8330 system_pods.go:89] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.504655    8330 system_pods.go:89] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.504662    8330 system_pods.go:89] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.504674    8330 system_pods.go:89] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.504685    8330 system_pods.go:89] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.504698    8330 system_pods.go:89] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.504712    8330 system_pods.go:89] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.504724    8330 system_pods.go:89] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.504734    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.504746    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:20:45.504759    8330 system_pods.go:89] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.504773    8330 system_pods.go:126] duration metric: took 65.795363ms to wait for k8s-apps to be running ...
	I0929 10:20:45.504787    8330 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:20:45.504845    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
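The systemctl call above is how the test asserts that the kubelet service is active inside the VM. A hedged equivalent run from the host, using the profile name of this run (a zero exit status means active):

	minikube -p addons-911532 ssh -- sudo systemctl is-active kubelet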
	I0929 10:20:45.714542    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.928522    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.929140    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.136638    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.615124231s)
	W0929 10:20:46.136687    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:46.136709    8330 retry.go:31] will retry after 424.774106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
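The apply keeps failing for the same reason on every retry: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because its first document is missing the required apiVersion and kind fields, so nothing from that file is applied even though the other gadget resources go through. A hedged way to confirm what the file actually contains inside the VM (path taken from the log, profile name from this run):

	minikube -p addons-911532 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml

The stderr also names the escape hatch, --validate=false, which would skip the check rather than fix the manifest.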
	I0929 10:20:46.136723    8330 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.605137457s)
	I0929 10:20:46.138626    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:46.139865    8330 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:20:46.140982    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:20:46.141003    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:20:46.207677    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.212782    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:20:46.212807    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:20:46.366549    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.366571    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:20:46.428820    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.428931    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.438908    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.561803    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:46.711871    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.927480    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.927570    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.210898    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.425645    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.426862    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.619932    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.233004041s)
	I0929 10:20:47.619964    8330 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.115094401s)
	I0929 10:20:47.619993    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620010    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620013    8330 system_svc.go:56] duration metric: took 2.115222945s WaitForService to wait for kubelet
	I0929 10:20:47.620026    8330 kubeadm.go:578] duration metric: took 13.041192565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:20:47.620054    8330 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:20:47.620300    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:47.620344    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.620383    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620401    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620637    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620655    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.627713    8330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 10:20:47.627742    8330 node_conditions.go:123] node cpu capacity is 2
	I0929 10:20:47.627760    8330 node_conditions.go:105] duration metric: took 7.699657ms to run NodePressure ...
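The node-capacity figures above (17734596Ki of ephemeral storage, 2 CPUs) can be read back directly from the node object; a hedged example using the node name from this run:

	kubectl --context addons-911532 get node addons-911532 -o jsonpath='{.status.capacity}'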
	I0929 10:20:47.627774    8330 start.go:241] waiting for startup goroutines ...
	I0929 10:20:47.711789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.936879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.936886    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.243761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.409409    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.970463476s)
	I0929 10:20:48.409454    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409465    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.409848    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.409869    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.409871    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:48.409880    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409889    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.410156    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.410172    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.411269    8330 addons.go:479] Verifying addon gcp-auth=true in "addons-911532"
	I0929 10:20:48.412822    8330 out.go:179] * Verifying gcp-auth addon...
	I0929 10:20:48.415066    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:20:48.435583    8330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:20:48.435609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.444290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:48.444495    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.711086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.926706    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.926805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.928639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.215777    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.345459    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.783617228s)
	W0929 10:20:49.345502    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:49.345521    8330 retry.go:31] will retry after 771.396332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:49.427174    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.427499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.430561    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:49.718587    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.920192    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.923406    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.929629    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.117584    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:50.213086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.424674    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.428302    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:50.428402    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.711184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.920140    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.925731    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.928955    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.148250    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030628865s)
	W0929 10:20:51.148302    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.148324    8330 retry.go:31] will retry after 576.274213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.211066    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.423094    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.427282    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:51.429044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.713135    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.725183    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:51.924229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.924401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.930896    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.209703    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.421865    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.425402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.428630    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.716412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.924295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.930265    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.930335    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.936143    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.210924841s)
	W0929 10:20:52.936185    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:52.936205    8330 retry.go:31] will retry after 1.374220476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:53.207601    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.421623    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.424423    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.425168    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:53.716959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.924543    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.924591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.924737    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.206885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.311018    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:54.419619    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.424155    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.425928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.711437    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.921635    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.923109    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.923875    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.207886    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.357008    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.045956607s)
	W0929 10:20:55.357041    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.357056    8330 retry.go:31] will retry after 2.584738248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.419277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.423271    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.425958    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.771885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.922759    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.925311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.926888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.286209    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.421963    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.425255    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.427805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:56.711210    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.919760    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.923081    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.925860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.208042    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.421946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.425265    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.425867    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.707061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.929800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.930205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.931973    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.942181    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:58.207102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.423712    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:58.423755    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.427125    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.715894    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.918954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.921183    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.923721    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:59.059080    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116858718s)
	W0929 10:20:59.059141    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.059166    8330 retry.go:31] will retry after 1.942151479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.209232    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:59.417948    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:59.429985    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:59.430010    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130362    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130976    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.132182    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.132787    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.228828    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.419020    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.421809    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.424680    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.709229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.927517    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.928518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.928523    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.001724    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:01.208275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.419888    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.428910    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.429180    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.708863    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.920044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.923338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.926834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:01.985595    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:01.985631    8330 retry.go:31] will retry after 3.874793998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:02.207338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.419005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.423832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.425188    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:02.710318    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.919221    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.922831    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.925818    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.211916    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.421799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.423873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:03.425858    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.707940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.918761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.924771    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.925496    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.208373    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.427530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.427562    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:04.429185    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.711395    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.918946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.922890    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.925419    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.207717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.425588    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.426139    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.428064    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:05.709966    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.861215    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:05.919835    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.925204    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.925220    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.512873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.512876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.512941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:06.513032    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.712945    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.919940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.927065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.928484    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.092306    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.231046214s)
	W0929 10:21:07.092346    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.092387    8330 retry.go:31] will retry after 5.851261749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.210508    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.421136    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.424149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.424367    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.709771    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.920164    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.925061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.928279    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.220428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.419698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.423421    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.427645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:08.714820    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.919380    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.924174    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.926180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.210300    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.418857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.422339    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.423046    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.711312    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.920056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.925490    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.925515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.207095    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.425993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.426301    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.426888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:10.708041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.921163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.923488    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.925261    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.211024    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.422876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.426400    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.428603    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.709665    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.919412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.925463    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.929002    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.209928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.420018    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.424532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.425138    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.710157    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.920343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.925416    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.926144    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.944295    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:13.208230    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.420309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.424729    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.425970    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.710892    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:13.844128    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.844162    8330 retry.go:31] will retry after 11.364763944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.918763    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.922860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.923485    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.206401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.418165    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.425970    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.426096    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.933764    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.937462    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.937474    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.937812    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.208057    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.418646    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.425269    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.425769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.993595    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.997320    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.997530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.997548    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.206772    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.422583    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.424335    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.426227    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.708097    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.921247    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.923984    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.925900    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.210604    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.419727    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.428991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.429113    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.713728    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.929841    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.930573    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.933149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.208428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.420222    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.424398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.424564    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.711774    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.918936    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.922240    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.923709    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.207800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.419045    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.422805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.422969    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.705451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.918694    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.923618    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.924430    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.207194    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.424041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.432156    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.434202    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.713518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.921792    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.927184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.927815    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.207457    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.418704    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.422991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.425131    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.708372    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.924974    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.925102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.925333    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.208676    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.418579    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.422645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.424686    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.709484    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.926015    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.927557    8330 kapi.go:107] duration metric: took 39.008871236s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:21:22.929226    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.209576    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.425205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.428082    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.714593    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.920363    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.924951    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.207552    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.420112    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.707639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.922839    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.923981    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.209524    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:25.391829    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.419769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.423811    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.709920    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.919838    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.922426    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.207779    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.300301    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.090742353s)
	W0929 10:21:26.300347    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:26.300372    8330 retry.go:31] will retry after 12.261050049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:26.418609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.425516    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.709030    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.920490    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.923303    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.210832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.419571    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.423843    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.717343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.920068    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.929499    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.213205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.420745    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.425514    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.715069    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.919315    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.924075    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.209126    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.418285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.425171    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.722341    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.919736    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.924941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.207130    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.421800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.422894    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.712262    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.919477    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.922148    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.208448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.418793    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.422244    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.711448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.921287    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.923795    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.209904    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.419914    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.422336    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.711037    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.920967    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.928515    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.207431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.419316    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.422381    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.709295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.924149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.928383    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.208000    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.428340    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.431876    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.709426    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.920188    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.924270    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.207181    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.418439    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.423100    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.707578    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.937088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.939327    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.208907    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.420989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.423616    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.708309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.919632    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.924273    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.207435    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.419671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.423102    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.783791    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.919989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.924314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.210022    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.420054    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.431837    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.562020    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:38.713780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.923654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.097166    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.208499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.429072    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.429738    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.711870    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.726897    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.164840561s)
	W0929 10:21:39.726947    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.726967    8330 retry.go:31] will retry after 11.307676359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.923119    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.930020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.210041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.420416    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.423961    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.709983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.918532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.921906    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.211550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.419901    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.421841    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.710969    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.918815    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.923114    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.210789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.421257    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.423834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.711332    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.919390    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.923203    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.209065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.418434    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.425216    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.710063    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.917640    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.922545    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.205527    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.418369    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.422405    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.712591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.925166    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.926743    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.214074    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.418599    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.422428    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.713883    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.920464    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.923397    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.207761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.424770    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.430331    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.708102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.928807    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.930451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.205481    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.418566    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.425398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.713263    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.919750    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.923524    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.206758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.419899    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.421913    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.711173    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.923285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.923314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.208056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.419528    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.423287    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.711515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.924180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.925537    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.212106    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.419682    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.423313    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.716590    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.919524    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.922669    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.034797    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:51.209977    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.418761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.712918    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.923780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.926533    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.208987    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.265550    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.230718165s)
	W0929 10:21:52.265592    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.265613    8330 retry.go:31] will retry after 29.631524393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.428241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.428344    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.749549    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.921742    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.928462    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.207817    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.419516    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.423773    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.711799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.920857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.925608    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.206121    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.419654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.424065    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.715431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.920151    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.925741    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.212980    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.419636    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.423024    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.713534    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.925668    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.934020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.245122    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.419044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.422805    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.708253    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.922688    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.922921    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:57.212695    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.430279    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:57.435265    8330 kapi.go:107] duration metric: took 1m13.516044822s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:21:57.708402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.924317    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.210469    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.418928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.712217    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.918879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.210802    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.421325    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.707536    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.923138    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.208005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.419250    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.708379    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.918693    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.206545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.418717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.707897    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.924458    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.205991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.419531    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.707091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.918959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.207504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.419459    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.707093    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.919081    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.207001    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.418468    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.707785    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.918993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.207795    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.418672    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.706790    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.920088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.207438    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.418671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.705954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.919275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.206855    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.418730    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.706264    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.918117    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.206783    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.426939    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.710678    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.918698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.206327    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.418553    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.707129    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.918195    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.207272    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.418565    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.707124    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.919764    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.206241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.418797    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.706944    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.919689    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.207328    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.418983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.706788    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.919311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.206761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.419370    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.712805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.919513    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.206504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.418758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.706621    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.918962    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.207334    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.419169    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.708290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.918738    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.206832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.419219    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.707913    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.919338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.207062    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.418184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.707167    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.918891    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.207006    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.418163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.707075    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.919925    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.206556    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.418550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.713091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.920930    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.213277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.421532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.714653    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.919900    8330 kapi.go:107] duration metric: took 1m32.50483081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:22:20.922981    8330 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-911532 cluster.
	I0929 10:22:20.924653    8330 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:22:20.926061    8330 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
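	[The gcp-auth addon messages above describe opting a single pod out of credential mounting via the `gcp-auth-skip-secret` label. A minimal sketch of what that looks like with stock kubectl, assuming a hypothetical pod named "my-app" that is not part of this test run:
	    # Hypothetical example: label the pod so the gcp-auth webhook skips mounting credentials into it
	    kubectl run my-app --image=docker.io/nginx:alpine --labels="gcp-auth-skip-secret=true"
	The same label can equally be set under metadata.labels in a pod manifest; existing pods would need to be recreated, or the addon re-enabled with --refresh, as the output above notes.]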
	I0929 10:22:21.207013    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.714545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.897772    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:22:22.206398    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:22:22.599960    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:22:22.600034    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600048    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600335    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:22:22.600380    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600381    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600626    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600645    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600652    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:22:22.600742    8330 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
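	[The validation error above indicates that at least one document in ig-crd.yaml is missing the required apiVersion and kind fields, which client-side validation rejects before anything is sent to the API server. A sketch of how this could be reproduced or worked around with standard kubectl flags, using the paths from the log; the --validate=false workaround is the one the error message itself suggests:
	    # Re-run validation locally without creating anything (client-side dry run)
	    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	    # Or skip client-side validation entirely, as suggested in the error output
	    kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
	]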
	I0929 10:22:22.710659    8330 kapi.go:107] duration metric: took 1m37.508081362s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:22:22.712652    8330 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, nvidia-device-plugin, registry-creds, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:22:22.713925    8330 addons.go:514] duration metric: took 1m48.135056911s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns default-storageclass cloud-spanner storage-provisioner nvidia-device-plugin registry-creds storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 10:22:22.713972    8330 start.go:246] waiting for cluster config update ...
	I0929 10:22:22.713998    8330 start.go:255] writing updated cluster config ...
	I0929 10:22:22.714320    8330 ssh_runner.go:195] Run: rm -f paused
	I0929 10:22:22.723573    8330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:22.726685    8330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.731909    8330 pod_ready.go:94] pod "coredns-66bc5c9577-2lxh5" is "Ready"
	I0929 10:22:22.731936    8330 pod_ready.go:86] duration metric: took 5.225628ms for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.733644    8330 pod_ready.go:83] waiting for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.738810    8330 pod_ready.go:94] pod "etcd-addons-911532" is "Ready"
	I0929 10:22:22.738834    8330 pod_ready.go:86] duration metric: took 5.173944ms for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.741797    8330 pod_ready.go:83] waiting for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.754573    8330 pod_ready.go:94] pod "kube-apiserver-addons-911532" is "Ready"
	I0929 10:22:22.754598    8330 pod_ready.go:86] duration metric: took 12.780428ms for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.758796    8330 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.128329    8330 pod_ready.go:94] pod "kube-controller-manager-addons-911532" is "Ready"
	I0929 10:22:23.128371    8330 pod_ready.go:86] duration metric: took 369.549352ms for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.328006    8330 pod_ready.go:83] waiting for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.728722    8330 pod_ready.go:94] pod "kube-proxy-zhcch" is "Ready"
	I0929 10:22:23.728750    8330 pod_ready.go:86] duration metric: took 400.712378ms for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.928748    8330 pod_ready.go:83] waiting for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327749    8330 pod_ready.go:94] pod "kube-scheduler-addons-911532" is "Ready"
	I0929 10:22:24.327772    8330 pod_ready.go:86] duration metric: took 399.002764ms for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327782    8330 pod_ready.go:40] duration metric: took 1.604186731s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:24.369933    8330 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:22:24.371860    8330 out.go:179] * Done! kubectl is now configured to use "addons-911532" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.159798796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141762159774234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b43832c-fd57-4c38-8bea-1d2120513e64 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.160946466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21311605-4ae1-4d04-8748-8185253fecde name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.161127972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21311605-4ae1-4d04-8748-8185253fecde name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.161899983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa
46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&Cont
ainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d06782828
9bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21311605-4ae1-4d04-8748-8185253fecde name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.205003534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28dcc305-8b9f-4d44-abde-37fc7a61fc9b name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.205288471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28dcc305-8b9f-4d44-abde-37fc7a61fc9b name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.206700547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2b60c8b-a7e2-4012-a23f-b860b17b9c0b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.208407160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141762208383273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2b60c8b-a7e2-4012-a23f-b860b17b9c0b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.209248714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e904e7cd-71f8-4b9a-a0ef-c1814152352b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.209508233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e904e7cd-71f8-4b9a-a0ef-c1814152352b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.210776352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa
46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&Cont
ainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d06782828
9bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e904e7cd-71f8-4b9a-a0ef-c1814152352b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.250696802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a93bdd1-9f16-4bab-b933-56233c595f42 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.250891381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a93bdd1-9f16-4bab-b933-56233c595f42 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.253124232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8de4381-4b4a-4157-99c8-108521f15ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.254462080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141762254436725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8de4381-4b4a-4157-99c8-108521f15ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.255026162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2878fec1-e6fc-4f0a-acf1-55b707854445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.255102680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2878fec1-e6fc-4f0a-acf1-55b707854445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.255691552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa
46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&Cont
ainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d06782828
9bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2878fec1-e6fc-4f0a-acf1-55b707854445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.296595621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30084338-f479-4234-8fdb-1fb7366850e9 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.296690945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30084338-f479-4234-8fdb-1fb7366850e9 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.297880359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e20c642-ba66-4661-928b-fa351540389f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.300243456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141762300219619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e20c642-ba66-4661-928b-fa351540389f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.301010666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0814cf39-c450-4df5-b7ac-b538c6c8de1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.301470160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0814cf39-c450-4df5-b7ac-b538c6c8de1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:29:22 addons-911532 crio[817]: time="2025-09-29 10:29:22.302425297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa
46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827edd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&Cont
ainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d06782828
9bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0814cf39-c450-4df5-b7ac-b538c6c8de1f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	dd2da61f9111a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   760f3f111a462       busybox
	86299903225c2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	3d23b4a0ef79c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	f31c1763f6da5       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             7 minutes ago       Running             controller                               0                   03bb444700e14       ingress-nginx-controller-9cc49f96f-vttt9
	7dbc3a7ea7e45       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             7 minutes ago       Exited              patch                                    2                   6c52aed8c7fa6       ingress-nginx-admission-patch-xljfq
	af76a866d9f71       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	5ca93f1803439       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	9da4833f4415d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	988aa6a5e8a50       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   4e8a339701c1f       snapshot-controller-7d9fbc56b8-bx82z
	b80e3a78fd38f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   a8bffbd0b4894       snapshot-controller-7d9fbc56b8-ldkqf
	1184f2460f269       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              create                                   0                   26d005e1ee499       ingress-nginx-admission-create-8bg4m
	6cd5b676567c1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	0f5d31e488abc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   580026dcf573a       csi-hostpath-resizer-0
	524ce5f57761b       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   40500d85e8ee6       csi-hostpath-attacher-0
	d65010026ccf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            7 minutes ago       Running             gadget                                   0                   c415564a01e1f       gadget-tp4c9
	efb1fb889a566       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               8 minutes ago       Running             minikube-ingress-dns                     0                   6a9b5cb08e2bc       kube-ingress-dns-minikube
	9b6f4ec2f78e9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   a5ffe00771c3b       amd-gpu-device-plugin-jh557
	8590713c2981f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   38c60c0820a0d       storage-provisioner
	b6c5c0be5e893       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   b478e3ec97228       coredns-66bc5c9577-2lxh5
	175a117fb6f06       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             8 minutes ago       Running             kube-proxy                               0                   0d650e4b5f405       kube-proxy-zhcch
	3b6dbae6113ba       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   f208189bae6ea       etcd-addons-911532
	a7fd029454118       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             8 minutes ago       Running             kube-controller-manager                  0                   2ab362827edd0       kube-controller-manager-addons-911532
	e0a50327ef601       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             8 minutes ago       Running             kube-scheduler                           0                   04eeebd713634       kube-scheduler-addons-911532
	a00a42bfe3851       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             8 minutes ago       Running             kube-apiserver                           0                   4232352893b52       kube-apiserver-addons-911532
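
The table above is the CRI-level container inventory on the node at log-collection time. It can typically be reproduced on the node itself; the command below is illustrative only, and the exact output columns depend on the installed crictl version:

    minikube -p addons-911532 ssh -- sudo crictl ps -a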
	
	
	==> coredns [b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a] <==
	[INFO] 10.244.0.8:50652 - 16984 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000291243s
	[INFO] 10.244.0.8:50652 - 50804 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000151578s
	[INFO] 10.244.0.8:50652 - 20738 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000103041s
	[INFO] 10.244.0.8:50652 - 42178 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000141825s
	[INFO] 10.244.0.8:50652 - 37241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104758s
	[INFO] 10.244.0.8:50652 - 56970 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015054s
	[INFO] 10.244.0.8:50652 - 44050 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000117583s
	[INFO] 10.244.0.8:48716 - 14813 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130702s
	[INFO] 10.244.0.8:48716 - 15156 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000208908s
	[INFO] 10.244.0.8:37606 - 64555 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146123s
	[INFO] 10.244.0.8:37606 - 64844 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012694s
	[INFO] 10.244.0.8:46483 - 39882 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094662s
	[INFO] 10.244.0.8:46483 - 40157 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000344836s
	[INFO] 10.244.0.8:39149 - 27052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128832s
	[INFO] 10.244.0.8:39149 - 26844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220783s
	[INFO] 10.244.0.23:43438 - 39803 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000622841s
	[INFO] 10.244.0.23:47210 - 22362 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000808655s
	[INFO] 10.244.0.23:54815 - 54620 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102275s
	[INFO] 10.244.0.23:48706 - 23486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000290579s
	[INFO] 10.244.0.23:35174 - 37530 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095187s
	[INFO] 10.244.0.23:58302 - 160 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148316s
	[INFO] 10.244.0.23:60222 - 18112 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001543386s
	[INFO] 10.244.0.23:42303 - 24400 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005221068s
	[INFO] 10.244.0.27:57662 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000379174s
	[INFO] 10.244.0.27:52524 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831634s
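
The NXDOMAIN bursts above are the expected side effect of cluster DNS search-domain expansion: with the default ndots:5 option, a name such as storage.googleapis.com is first tried against each search suffix (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) before the literal name resolves NOERROR. The search path can be inspected from any running pod; the command below is illustrative, and the values shown in the comment are the conventional defaults rather than captured output:

    kubectl --context addons-911532 exec busybox -n default -- cat /etc/resolv.conf
    # typically something like:
    #   nameserver 10.96.0.10
    #   search default.svc.cluster.local svc.cluster.local cluster.local
    #   options ndots:5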
	
	
	==> describe nodes <==
	Name:               addons-911532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-911532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=addons-911532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-911532
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-911532"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-911532
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:29:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    addons-911532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c8a2bbd76874c1a8020738f402773b8
	  System UUID:                0c8a2bbd-7687-4c1a-8020-738f402773b8
	  Boot ID:                    9d51dc84-868d-42de-9a46-75702ae9a571
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-tp4c9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-vttt9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m39s
	  kube-system                 amd-gpu-device-plugin-jh557                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 coredns-66bc5c9577-2lxh5                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m48s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpathplugin-zrj57                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 etcd-addons-911532                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m54s
	  kube-system                 kube-apiserver-addons-911532                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 kube-controller-manager-addons-911532       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-zhcch                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-scheduler-addons-911532                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-bx82z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-ldkqf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m46s                kube-proxy       
	  Normal  Starting                 9m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m1s (x8 over 9m1s)  kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s (x8 over 9m1s)  kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s (x7 over 9m1s)  kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m54s                kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s                kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s                kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m53s                kubelet          Node addons-911532 status is now: NodeReady
	  Normal  RegisteredNode           8m50s                node-controller  Node addons-911532 event: Registered Node addons-911532 in Controller
	
	
	==> dmesg <==
	[Sep29 10:21] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.855059] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.424741] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.533609] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.677200] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.756729] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.380361] kauditd_printk_skb: 115 callbacks suppressed
	[  +4.682193] kauditd_printk_skb: 120 callbacks suppressed
	[  +4.066585] kauditd_printk_skb: 83 callbacks suppressed
	[Sep29 10:22] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.687590] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.038379] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.163052] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.578786] kauditd_printk_skb: 46 callbacks suppressed
	[Sep29 10:23] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000124] kauditd_printk_skb: 22 callbacks suppressed
	[ +30.032876] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.772049] kauditd_printk_skb: 107 callbacks suppressed
	[Sep29 10:24] kauditd_printk_skb: 54 callbacks suppressed
	[ +50.465336] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 9 callbacks suppressed
	[Sep29 10:25] kauditd_printk_skb: 26 callbacks suppressed
	[Sep29 10:28] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651] <==
	{"level":"warn","ts":"2025-09-29T10:21:33.629920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.200713ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:33.629938Z","caller":"traceutil/trace.go:172","msg":"trace[582963862] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"153.249831ms","start":"2025-09-29T10:21:33.476683Z","end":"2025-09-29T10:21:33.629933Z","steps":["trace[582963862] 'agreement among raft nodes before linearized reading'  (duration: 153.191902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:33.632640Z","caller":"traceutil/trace.go:172","msg":"trace[5721142] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"196.15975ms","start":"2025-09-29T10:21:33.435470Z","end":"2025-09-29T10:21:33.631629Z","steps":["trace[5721142] 'process raft request'  (duration: 194.644961ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:39.088425Z","caller":"traceutil/trace.go:172","msg":"trace[1920545131] linearizableReadLoop","detail":"{readStateIndex:1075; appliedIndex:1075; }","duration":"165.718933ms","start":"2025-09-29T10:21:38.922692Z","end":"2025-09-29T10:21:39.088411Z","steps":["trace[1920545131] 'read index received'  (duration: 165.713078ms)","trace[1920545131] 'applied index is now lower than readState.Index'  (duration: 5.095µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:39.088595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.848818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:39.088625Z","caller":"traceutil/trace.go:172","msg":"trace[1181994063] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"165.92758ms","start":"2025-09-29T10:21:38.922688Z","end":"2025-09-29T10:21:39.088616Z","steps":["trace[1181994063] 'agreement among raft nodes before linearized reading'  (duration: 165.822615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:21:39.089113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.606269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq\" limit:1 ","response":"range_response_count:1 size:4722"}
	{"level":"info","ts":"2025-09-29T10:21:39.089162Z","caller":"traceutil/trace.go:172","msg":"trace[517473847] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq; range_end:; response_count:1; response_revision:1048; }","duration":"164.659529ms","start":"2025-09-29T10:21:38.924494Z","end":"2025-09-29T10:21:39.089153Z","steps":["trace[517473847] 'agreement among raft nodes before linearized reading'  (duration: 164.533832ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:39.089291Z","caller":"traceutil/trace.go:172","msg":"trace[103671638] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"167.91547ms","start":"2025-09-29T10:21:38.921368Z","end":"2025-09-29T10:21:39.089284Z","steps":["trace[103671638] 'process raft request'  (duration: 167.512399ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:41.128032Z","caller":"traceutil/trace.go:172","msg":"trace[1380742237] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"160.629944ms","start":"2025-09-29T10:21:40.967387Z","end":"2025-09-29T10:21:41.128017Z","steps":["trace[1380742237] 'process raft request'  (duration: 160.428456ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:52.740363Z","caller":"traceutil/trace.go:172","msg":"trace[100017207] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"122.049264ms","start":"2025-09-29T10:21:52.618297Z","end":"2025-09-29T10:21:52.740347Z","steps":["trace[100017207] 'process raft request'  (duration: 121.808982ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.234316Z","caller":"traceutil/trace.go:172","msg":"trace[1596468790] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1190; }","duration":"200.26342ms","start":"2025-09-29T10:21:56.034037Z","end":"2025-09-29T10:21:56.234300Z","steps":["trace[1596468790] 'read index received'  (duration: 200.256637ms)","trace[1596468790] 'applied index is now lower than readState.Index'  (duration: 6.184µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:56.234915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.854605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:56.235162Z","caller":"traceutil/trace.go:172","msg":"trace[794219373] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1159; }","duration":"201.11834ms","start":"2025-09-29T10:21:56.034033Z","end":"2025-09-29T10:21:56.235151Z","steps":["trace[794219373] 'agreement among raft nodes before linearized reading'  (duration: 200.701253ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.235298Z","caller":"traceutil/trace.go:172","msg":"trace[1282453769] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"273.44806ms","start":"2025-09-29T10:21:55.961839Z","end":"2025-09-29T10:21:56.235287Z","steps":["trace[1282453769] 'process raft request'  (duration: 272.570369ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:49.922596Z","caller":"traceutil/trace.go:172","msg":"trace[1297543237] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"107.889005ms","start":"2025-09-29T10:23:49.814676Z","end":"2025-09-29T10:23:49.922565Z","steps":["trace[1297543237] 'process raft request'  (duration: 107.763843ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906428Z","caller":"traceutil/trace.go:172","msg":"trace[852559153] linearizableReadLoop","detail":"{readStateIndex:1673; appliedIndex:1673; }","duration":"207.27017ms","start":"2025-09-29T10:23:56.699140Z","end":"2025-09-29T10:23:56.906410Z","steps":["trace[852559153] 'read index received'  (duration: 207.264352ms)","trace[852559153] 'applied index is now lower than readState.Index'  (duration: 4.799µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:23:56.906582Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.425338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:23:56.906604Z","caller":"traceutil/trace.go:172","msg":"trace[159869457] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1610; }","duration":"207.488053ms","start":"2025-09-29T10:23:56.699111Z","end":"2025-09-29T10:23:56.906599Z","steps":["trace[159869457] 'agreement among raft nodes before linearized reading'  (duration: 207.399273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.171419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0\" limit:1 ","response":"range_response_count:1 size:4572"}
	{"level":"info","ts":"2025-09-29T10:23:56.906788Z","caller":"traceutil/trace.go:172","msg":"trace[1828175108] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0; range_end:; response_count:1; response_revision:1611; }","duration":"168.215755ms","start":"2025-09-29T10:23:56.738542Z","end":"2025-09-29T10:23:56.906758Z","steps":["trace[1828175108] 'agreement among raft nodes before linearized reading'  (duration: 168.108786ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906872Z","caller":"traceutil/trace.go:172","msg":"trace[928903816] transaction","detail":"{read_only:false; response_revision:1611; number_of_response:1; }","duration":"363.567544ms","start":"2025-09-29T10:23:56.543297Z","end":"2025-09-29T10:23:56.906865Z","steps":["trace[928903816] 'process raft request'  (duration: 363.245361ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.243902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-29T10:23:56.906980Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:23:56.543275Z","time spent":"363.614208ms","remote":"127.0.0.1:49608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1603 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-29T10:23:56.906992Z","caller":"traceutil/trace.go:172","msg":"trace[679122027] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1611; }","duration":"126.265845ms","start":"2025-09-29T10:23:56.780721Z","end":"2025-09-29T10:23:56.906987Z","steps":["trace[679122027] 'agreement among raft nodes before linearized reading'  (duration: 126.228069ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:29:22 up 9 min,  0 users,  load average: 0.76, 0.71, 0.53
	Linux addons-911532 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e] <==
	E0929 10:21:30.804776       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.826075       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.867726       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.949357       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	I0929 10:21:31.164100       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 10:21:36.538229       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:21:38.492287       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:22:34.164479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.179:8443->192.168.39.1:36626: use of closed network connection
	E0929 10:22:34.340442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.179:8443->192.168.39.1:36648: use of closed network connection
	I0929 10:22:43.843704       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.159.127"}
	I0929 10:22:47.142596       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:22:56.238986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:23:31.826473       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:24:03.043737       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:24:03.224458       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.145.250"}
	I0929 10:24:09.259241       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:24:18.836921       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:25:29.628240       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:25:45.763121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:37.011596       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:47.943967       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:27:48.771890       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:27:57.050958       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:28:53.219223       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:29:15.898925       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636] <==
	I0929 10:21:03.242834       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:21:30.779413       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/metrics-server" err="EndpointSlice informer cache is out of date"
	I0929 10:22:47.640151       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 10:24:07.669590       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0929 10:24:11.181949       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0929 10:27:47.980833       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:28:02.981587       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:28:17.982618       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:28:27.995445       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.062681       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.088270       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.123724       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.182004       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.277041       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.452315       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:28.792551       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:29.447648       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:30.749664       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:32.983465       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:28:33.324866       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:38.459932       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: failed to delete pods for namespace: local-path-storage, err: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0929 10:28:47.984274       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I0929 10:28:53.985565       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	E0929 10:29:02.984368       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:29:17.985402       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab] <==
	I0929 10:20:35.986576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:20:36.189499       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:20:36.189548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.179"]
	E0929 10:20:36.189623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:20:36.301867       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:20:36.301934       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:20:36.301961       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:20:36.326623       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:20:36.327146       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:20:36.327246       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:20:36.336796       1 config.go:200] "Starting service config controller"
	I0929 10:20:36.336830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:20:36.336848       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:20:36.336851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:20:36.336861       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:20:36.336866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:20:36.342731       1 config.go:309] "Starting node config controller"
	I0929 10:20:36.342767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:20:36.342774       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:20:36.437304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:20:36.437613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:20:36.437632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50] <==
	E0929 10:20:26.063633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:26.064483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:26.064623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:20:26.064815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:26.065104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:26.069817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:26.071395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:20:26.072119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.073653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:26.073850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.074029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:26.883755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:20:26.932747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:26.936951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.973390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.982912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:27.004100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:27.067449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:20:27.073035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:27.168604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:20:27.203313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:27.256704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:27.286622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:27.625245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 10:20:29.547277       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:28:42 addons-911532 kubelet[1498]: E0929 10:28:42.484580    1498 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:28:42 addons-911532 kubelet[1498]: E0929 10:28:42.484640    1498 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:28:42 addons-911532 kubelet[1498]: E0929 10:28:42.485018    1498 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0_local-path-storage(578a5a7c-d138-4bb8-a5f0-099878d77d28): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:28:42 addons-911532 kubelet[1498]: E0929 10:28:42.485063    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0" podUID="578a5a7c-d138-4bb8-a5f0-099878d77d28"
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.729918    1498 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/578a5a7c-d138-4bb8-a5f0-099878d77d28-data\") pod \"578a5a7c-d138-4bb8-a5f0-099878d77d28\" (UID: \"578a5a7c-d138-4bb8-a5f0-099878d77d28\") "
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.730010    1498 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt2zm\" (UniqueName: \"kubernetes.io/projected/578a5a7c-d138-4bb8-a5f0-099878d77d28-kube-api-access-pt2zm\") pod \"578a5a7c-d138-4bb8-a5f0-099878d77d28\" (UID: \"578a5a7c-d138-4bb8-a5f0-099878d77d28\") "
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.730036    1498 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/578a5a7c-d138-4bb8-a5f0-099878d77d28-script\") pod \"578a5a7c-d138-4bb8-a5f0-099878d77d28\" (UID: \"578a5a7c-d138-4bb8-a5f0-099878d77d28\") "
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.730239    1498 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/578a5a7c-d138-4bb8-a5f0-099878d77d28-data" (OuterVolumeSpecName: "data") pod "578a5a7c-d138-4bb8-a5f0-099878d77d28" (UID: "578a5a7c-d138-4bb8-a5f0-099878d77d28"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.730543    1498 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578a5a7c-d138-4bb8-a5f0-099878d77d28-script" (OuterVolumeSpecName: "script") pod "578a5a7c-d138-4bb8-a5f0-099878d77d28" (UID: "578a5a7c-d138-4bb8-a5f0-099878d77d28"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.733244    1498 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578a5a7c-d138-4bb8-a5f0-099878d77d28-kube-api-access-pt2zm" (OuterVolumeSpecName: "kube-api-access-pt2zm") pod "578a5a7c-d138-4bb8-a5f0-099878d77d28" (UID: "578a5a7c-d138-4bb8-a5f0-099878d77d28"). InnerVolumeSpecName "kube-api-access-pt2zm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.831221    1498 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pt2zm\" (UniqueName: \"kubernetes.io/projected/578a5a7c-d138-4bb8-a5f0-099878d77d28-kube-api-access-pt2zm\") on node \"addons-911532\" DevicePath \"\""
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.831377    1498 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/578a5a7c-d138-4bb8-a5f0-099878d77d28-script\") on node \"addons-911532\" DevicePath \"\""
	Sep 29 10:28:42 addons-911532 kubelet[1498]: I0929 10:28:42.831411    1498 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/578a5a7c-d138-4bb8-a5f0-099878d77d28-data\") on node \"addons-911532\" DevicePath \"\""
	Sep 29 10:28:44 addons-911532 kubelet[1498]: I0929 10:28:44.605751    1498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578a5a7c-d138-4bb8-a5f0-099878d77d28" path="/var/lib/kubelet/pods/578a5a7c-d138-4bb8-a5f0-099878d77d28/volumes"
	Sep 29 10:28:47 addons-911532 kubelet[1498]: I0929 10:28:47.599451    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:28:49 addons-911532 kubelet[1498]: E0929 10:28:49.157388    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141729156973177  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:28:49 addons-911532 kubelet[1498]: E0929 10:28:49.157414    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141729156973177  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:28:51 addons-911532 kubelet[1498]: E0929 10:28:51.600014    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="19fbb660-be46-4ddb-af92-da7e55790348"
	Sep 29 10:28:59 addons-911532 kubelet[1498]: E0929 10:28:59.160611    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141739160042115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:28:59 addons-911532 kubelet[1498]: E0929 10:28:59.160652    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141739160042115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:29:09 addons-911532 kubelet[1498]: E0929 10:29:09.164029    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141749163348798  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:29:09 addons-911532 kubelet[1498]: E0929 10:29:09.164118    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141749163348798  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:29:12 addons-911532 kubelet[1498]: I0929 10:29:12.599750    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jh557" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:29:19 addons-911532 kubelet[1498]: E0929 10:29:19.166938    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141759166604626  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:29:19 addons-911532 kubelet[1498]: E0929 10:29:19.166979    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141759166604626  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	
	
	==> storage-provisioner [8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991] <==
	W0929 10:28:58.635520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:00.640426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:00.645528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:02.649843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:02.657464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:04.660826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:04.666645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:06.670712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:06.677902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:08.683436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:08.692724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:10.699741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:10.708079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:12.711554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:12.719298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:14.722816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:14.727980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:16.731918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:16.737007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:18.742959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:18.749054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:20.753240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:20.761260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:22.767102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:29:22.777415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-911532 -n addons-911532
helpers_test.go:269: (dbg) Run:  kubectl --context addons-911532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq: exit status 1 (81.527689ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:24:03 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4bxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j4bxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m20s                 default-scheduler  Successfully assigned default/nginx to addons-911532
	  Warning  Failed     102s (x2 over 3m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x2 over 3m45s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    88s (x2 over 3m44s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     88s (x2 over 3m44s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    77s (x3 over 5m20s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:23:20 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8z2x6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-8z2x6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m3s                default-scheduler  Successfully assigned default/task-pv-pod to addons-911532
	  Warning  Failed     72s (x3 over 5m)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x3 over 5m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x5 over 5m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     32s (x5 over 5m)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    20s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6jzv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g6jzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8bg4m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xljfq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.957809453s)
--- FAIL: TestAddons/parallel/CSI (389.81s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (366.32s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-911532 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-911532 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-911532 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
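Note on the lines above: the identical helpers_test.go:402 entries are the harness re-running the same "kubectl get pvc test-pvc" phase check over and over until the wait in addons_test.go:960 hits its context deadline. A minimal, hypothetical Go sketch of that poll-until-deadline pattern is shown below; it is not the actual minikube test helper, and the function name, the 2-second interval and the 5-minute timeout are assumptions for illustration only.

// Hypothetical sketch of the poll-until-deadline pattern behind the repeated
// "get pvc" lines above; not the minikube helper itself. The function name,
// 2s poll interval and 5m timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound re-runs kubectl until the claim reports phase "Bound" or ctx expires.
func waitForPVCBound(ctx context.Context, kubeContext, name, namespace string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		out, _ := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-911532", "test-pvc", "default"); err != nil {
		fmt.Println(err)
	}
}

When a claim stays unbound like this, "kubectl describe pvc test-pvc -n default" (together with the local-path provisioner pod's logs) will usually show whether the storage-provisioner-rancher addon ever attempted to provision a volume for it.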
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-911532 -n addons-911532
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 logs -n 25: (1.433621509s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-910458                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ -o=json --download-only -p download-only-452531 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-910458                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-452531                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ --download-only -p binary-mirror-757361 --alsologtostderr --binary-mirror http://127.0.0.1:43621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ -p binary-mirror-757361                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-757361 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ addons  │ disable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ addons  │ enable dashboard -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ start   │ -p addons-911532 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ enable headlamp -p addons-911532 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-911532 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ip      │ addons-911532 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-911532 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-911532                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	│ addons  │ addons-911532 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-911532        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:49.657940    8330 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:49.658280    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658293    8330 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:49.658299    8330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:49.658774    8330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:19:49.659724    8330 out.go:368] Setting JSON to false
	I0929 10:19:49.660569    8330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":135,"bootTime":1759141055,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:49.660646    8330 start.go:140] virtualization: kvm guest
	I0929 10:19:49.662346    8330 out.go:179] * [addons-911532] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:49.663847    8330 notify.go:220] Checking for updates...
	I0929 10:19:49.663868    8330 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:19:49.665023    8330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:49.666170    8330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:19:49.667465    8330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:49.668605    8330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:19:49.669820    8330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:19:49.670997    8330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:49.700388    8330 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 10:19:49.701463    8330 start.go:304] selected driver: kvm2
	I0929 10:19:49.701479    8330 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:19:49.701491    8330 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:19:49.702129    8330 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.702205    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.715255    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.715283    8330 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:49.729163    8330 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:49.729198    8330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:49.729518    8330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:19:49.729559    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:19:49.729599    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:19:49.729607    8330 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:49.729659    8330 start.go:348] cluster config:
	{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:49.729764    8330 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:49.731718    8330 out.go:179] * Starting "addons-911532" primary control-plane node in "addons-911532" cluster
	I0929 10:19:49.732842    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:49.732885    8330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:49.732892    8330 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:49.732961    8330 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:19:49.732971    8330 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:19:49.733271    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:19:49.733296    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json: {Name:mk3b1c31f51191d700bb099fb8f771ac33c82a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:19:49.733457    8330 start.go:360] acquireMachinesLock for addons-911532: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 10:19:49.733506    8330 start.go:364] duration metric: took 34.004µs to acquireMachinesLock for "addons-911532"
	I0929 10:19:49.733524    8330 start.go:93] Provisioning new machine with config: &{Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:19:49.733580    8330 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 10:19:49.735166    8330 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 10:19:49.735279    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:19:49.735315    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:19:49.747570    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0929 10:19:49.748034    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:19:49.748606    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:19:49.748628    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:19:49.748980    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:19:49.749155    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:19:49.749278    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:19:49.749427    8330 start.go:159] libmachine.API.Create for "addons-911532" (driver="kvm2")
	I0929 10:19:49.749454    8330 client.go:168] LocalClient.Create starting
	I0929 10:19:49.749497    8330 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem
	I0929 10:19:49.897019    8330 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem
	I0929 10:19:49.971089    8330 main.go:141] libmachine: Running pre-create checks...
	I0929 10:19:49.971109    8330 main.go:141] libmachine: (addons-911532) Calling .PreCreateCheck
	I0929 10:19:49.971568    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:19:49.971999    8330 main.go:141] libmachine: Creating machine...
	I0929 10:19:49.972014    8330 main.go:141] libmachine: (addons-911532) Calling .Create
	I0929 10:19:49.972178    8330 main.go:141] libmachine: (addons-911532) creating domain...
	I0929 10:19:49.972189    8330 main.go:141] libmachine: (addons-911532) creating network...
	I0929 10:19:49.973497    8330 main.go:141] libmachine: (addons-911532) DBG | found existing default network
	I0929 10:19:49.973637    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.973653    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>default</name>
	I0929 10:19:49.973661    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 10:19:49.973670    8330 main.go:141] libmachine: (addons-911532) DBG |   <forward mode='nat'>
	I0929 10:19:49.973677    8330 main.go:141] libmachine: (addons-911532) DBG |     <nat>
	I0929 10:19:49.973688    8330 main.go:141] libmachine: (addons-911532) DBG |       <port start='1024' end='65535'/>
	I0929 10:19:49.973700    8330 main.go:141] libmachine: (addons-911532) DBG |     </nat>
	I0929 10:19:49.973706    8330 main.go:141] libmachine: (addons-911532) DBG |   </forward>
	I0929 10:19:49.973715    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 10:19:49.973722    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 10:19:49.973731    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 10:19:49.973740    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.973749    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 10:19:49.973765    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.973776    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.973780    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.973787    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974334    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:49.974184    8358 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200dd0}
	I0929 10:19:49.974373    8330 main.go:141] libmachine: (addons-911532) DBG | defining private network:
	I0929 10:19:49.974397    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.974420    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:49.974439    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:49.974466    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:49.974489    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:49.974503    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:49.974515    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:49.974525    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:49.974531    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:49.974536    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:49.974542    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:49.980371    8330 main.go:141] libmachine: (addons-911532) DBG | creating private network mk-addons-911532 192.168.39.0/24...
	I0929 10:19:50.045524    8330 main.go:141] libmachine: (addons-911532) DBG | private network mk-addons-911532 192.168.39.0/24 created
	I0929 10:19:50.045754    8330 main.go:141] libmachine: (addons-911532) DBG | <network>
	I0929 10:19:50.045775    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>mk-addons-911532</name>
	I0929 10:19:50.045788    8330 main.go:141] libmachine: (addons-911532) setting up store path in /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.045815    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>1948f630-90e3-4c16-adbb-718b17efed7e</uuid>
	I0929 10:19:50.045832    8330 main.go:141] libmachine: (addons-911532) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 10:19:50.045851    8330 main.go:141] libmachine: (addons-911532) building disk image from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:19:50.045876    8330 main.go:141] libmachine: (addons-911532) DBG |   <mac address='52:54:00:30:e5:b4'/>
	I0929 10:19:50.045894    8330 main.go:141] libmachine: (addons-911532) DBG |   <dns enable='no'/>
	I0929 10:19:50.045921    8330 main.go:141] libmachine: (addons-911532) Downloading /home/jenkins/minikube-integration/21657-3816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 10:19:50.045936    8330 main.go:141] libmachine: (addons-911532) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 10:19:50.045954    8330 main.go:141] libmachine: (addons-911532) DBG |     <dhcp>
	I0929 10:19:50.045966    8330 main.go:141] libmachine: (addons-911532) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 10:19:50.045976    8330 main.go:141] libmachine: (addons-911532) DBG |     </dhcp>
	I0929 10:19:50.045985    8330 main.go:141] libmachine: (addons-911532) DBG |   </ip>
	I0929 10:19:50.045994    8330 main.go:141] libmachine: (addons-911532) DBG | </network>
	I0929 10:19:50.046009    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:50.046032    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.045748    8358 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.297023    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.296839    8358 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa...
	I0929 10:19:50.440022    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.439881    8358 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk...
	I0929 10:19:50.440071    8330 main.go:141] libmachine: (addons-911532) DBG | Writing magic tar header
	I0929 10:19:50.440088    8330 main.go:141] libmachine: (addons-911532) DBG | Writing SSH key tar header
	I0929 10:19:50.440542    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:50.440479    8358 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 ...
	I0929 10:19:50.440591    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532
	I0929 10:19:50.440619    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532 (perms=drwx------)
	I0929 10:19:50.440632    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines (perms=drwxr-xr-x)
	I0929 10:19:50.440640    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines
	I0929 10:19:50.440665    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:50.440675    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816
	I0929 10:19:50.440683    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube (perms=drwxr-xr-x)
	I0929 10:19:50.440696    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration/21657-3816 (perms=drwxrwxr-x)
	I0929 10:19:50.440709    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 10:19:50.440718    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 10:19:50.440730    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home/jenkins
	I0929 10:19:50.440740    8330 main.go:141] libmachine: (addons-911532) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 10:19:50.440750    8330 main.go:141] libmachine: (addons-911532) DBG | checking permissions on dir: /home
	I0929 10:19:50.440759    8330 main.go:141] libmachine: (addons-911532) DBG | skipping /home - not owner
	I0929 10:19:50.440766    8330 main.go:141] libmachine: (addons-911532) defining domain...
	I0929 10:19:50.441750    8330 main.go:141] libmachine: (addons-911532) defining domain using XML: 
	I0929 10:19:50.441770    8330 main.go:141] libmachine: (addons-911532) <domain type='kvm'>
	I0929 10:19:50.441785    8330 main.go:141] libmachine: (addons-911532)   <name>addons-911532</name>
	I0929 10:19:50.441795    8330 main.go:141] libmachine: (addons-911532)   <memory unit='MiB'>4096</memory>
	I0929 10:19:50.441807    8330 main.go:141] libmachine: (addons-911532)   <vcpu>2</vcpu>
	I0929 10:19:50.441815    8330 main.go:141] libmachine: (addons-911532)   <features>
	I0929 10:19:50.441823    8330 main.go:141] libmachine: (addons-911532)     <acpi/>
	I0929 10:19:50.441831    8330 main.go:141] libmachine: (addons-911532)     <apic/>
	I0929 10:19:50.441838    8330 main.go:141] libmachine: (addons-911532)     <pae/>
	I0929 10:19:50.441843    8330 main.go:141] libmachine: (addons-911532)   </features>
	I0929 10:19:50.441851    8330 main.go:141] libmachine: (addons-911532)   <cpu mode='host-passthrough'>
	I0929 10:19:50.441858    8330 main.go:141] libmachine: (addons-911532)   </cpu>
	I0929 10:19:50.441866    8330 main.go:141] libmachine: (addons-911532)   <os>
	I0929 10:19:50.441873    8330 main.go:141] libmachine: (addons-911532)     <type>hvm</type>
	I0929 10:19:50.441881    8330 main.go:141] libmachine: (addons-911532)     <boot dev='cdrom'/>
	I0929 10:19:50.441885    8330 main.go:141] libmachine: (addons-911532)     <boot dev='hd'/>
	I0929 10:19:50.441892    8330 main.go:141] libmachine: (addons-911532)     <bootmenu enable='no'/>
	I0929 10:19:50.441896    8330 main.go:141] libmachine: (addons-911532)   </os>
	I0929 10:19:50.441903    8330 main.go:141] libmachine: (addons-911532)   <devices>
	I0929 10:19:50.441907    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='cdrom'>
	I0929 10:19:50.441927    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.441934    8330 main.go:141] libmachine: (addons-911532)       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.441939    8330 main.go:141] libmachine: (addons-911532)       <readonly/>
	I0929 10:19:50.441943    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441951    8330 main.go:141] libmachine: (addons-911532)     <disk type='file' device='disk'>
	I0929 10:19:50.441959    8330 main.go:141] libmachine: (addons-911532)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 10:19:50.441966    8330 main.go:141] libmachine: (addons-911532)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.441973    8330 main.go:141] libmachine: (addons-911532)       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.441978    8330 main.go:141] libmachine: (addons-911532)     </disk>
	I0929 10:19:50.441990    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.441998    8330 main.go:141] libmachine: (addons-911532)       <source network='mk-addons-911532'/>
	I0929 10:19:50.442004    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442009    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442016    8330 main.go:141] libmachine: (addons-911532)     <interface type='network'>
	I0929 10:19:50.442022    8330 main.go:141] libmachine: (addons-911532)       <source network='default'/>
	I0929 10:19:50.442028    8330 main.go:141] libmachine: (addons-911532)       <model type='virtio'/>
	I0929 10:19:50.442033    8330 main.go:141] libmachine: (addons-911532)     </interface>
	I0929 10:19:50.442039    8330 main.go:141] libmachine: (addons-911532)     <serial type='pty'>
	I0929 10:19:50.442044    8330 main.go:141] libmachine: (addons-911532)       <target port='0'/>
	I0929 10:19:50.442050    8330 main.go:141] libmachine: (addons-911532)     </serial>
	I0929 10:19:50.442055    8330 main.go:141] libmachine: (addons-911532)     <console type='pty'>
	I0929 10:19:50.442067    8330 main.go:141] libmachine: (addons-911532)       <target type='serial' port='0'/>
	I0929 10:19:50.442072    8330 main.go:141] libmachine: (addons-911532)     </console>
	I0929 10:19:50.442078    8330 main.go:141] libmachine: (addons-911532)     <rng model='virtio'>
	I0929 10:19:50.442084    8330 main.go:141] libmachine: (addons-911532)       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.442090    8330 main.go:141] libmachine: (addons-911532)     </rng>
	I0929 10:19:50.442094    8330 main.go:141] libmachine: (addons-911532)   </devices>
	I0929 10:19:50.442100    8330 main.go:141] libmachine: (addons-911532) </domain>
	I0929 10:19:50.442106    8330 main.go:141] libmachine: (addons-911532) 
	I0929 10:19:50.449537    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:be:29:87 in network default
	I0929 10:19:50.449973    8330 main.go:141] libmachine: (addons-911532) starting domain...
	I0929 10:19:50.449986    8330 main.go:141] libmachine: (addons-911532) ensuring networks are active...
	I0929 10:19:50.450009    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:50.450701    8330 main.go:141] libmachine: (addons-911532) Ensuring network default is active
	I0929 10:19:50.451007    8330 main.go:141] libmachine: (addons-911532) Ensuring network mk-addons-911532 is active
	I0929 10:19:50.451538    8330 main.go:141] libmachine: (addons-911532) getting domain XML...
	I0929 10:19:50.452379    8330 main.go:141] libmachine: (addons-911532) DBG | starting domain XML:
	I0929 10:19:50.452399    8330 main.go:141] libmachine: (addons-911532) DBG | <domain type='kvm'>
	I0929 10:19:50.452408    8330 main.go:141] libmachine: (addons-911532) DBG |   <name>addons-911532</name>
	I0929 10:19:50.452415    8330 main.go:141] libmachine: (addons-911532) DBG |   <uuid>0c8a2bbd-7687-4c1a-8020-738f402773b8</uuid>
	I0929 10:19:50.452446    8330 main.go:141] libmachine: (addons-911532) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 10:19:50.452469    8330 main.go:141] libmachine: (addons-911532) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 10:19:50.452483    8330 main.go:141] libmachine: (addons-911532) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 10:19:50.452491    8330 main.go:141] libmachine: (addons-911532) DBG |   <os>
	I0929 10:19:50.452498    8330 main.go:141] libmachine: (addons-911532) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 10:19:50.452505    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='cdrom'/>
	I0929 10:19:50.452514    8330 main.go:141] libmachine: (addons-911532) DBG |     <boot dev='hd'/>
	I0929 10:19:50.452525    8330 main.go:141] libmachine: (addons-911532) DBG |     <bootmenu enable='no'/>
	I0929 10:19:50.452545    8330 main.go:141] libmachine: (addons-911532) DBG |   </os>
	I0929 10:19:50.452558    8330 main.go:141] libmachine: (addons-911532) DBG |   <features>
	I0929 10:19:50.452564    8330 main.go:141] libmachine: (addons-911532) DBG |     <acpi/>
	I0929 10:19:50.452573    8330 main.go:141] libmachine: (addons-911532) DBG |     <apic/>
	I0929 10:19:50.452589    8330 main.go:141] libmachine: (addons-911532) DBG |     <pae/>
	I0929 10:19:50.452598    8330 main.go:141] libmachine: (addons-911532) DBG |   </features>
	I0929 10:19:50.452605    8330 main.go:141] libmachine: (addons-911532) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 10:19:50.452612    8330 main.go:141] libmachine: (addons-911532) DBG |   <clock offset='utc'/>
	I0929 10:19:50.452628    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 10:19:50.452639    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_reboot>restart</on_reboot>
	I0929 10:19:50.452649    8330 main.go:141] libmachine: (addons-911532) DBG |   <on_crash>destroy</on_crash>
	I0929 10:19:50.452658    8330 main.go:141] libmachine: (addons-911532) DBG |   <devices>
	I0929 10:19:50.452665    8330 main.go:141] libmachine: (addons-911532) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 10:19:50.452674    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='cdrom'>
	I0929 10:19:50.452680    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw'/>
	I0929 10:19:50.452692    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/boot2docker.iso'/>
	I0929 10:19:50.452710    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 10:19:50.452726    8330 main.go:141] libmachine: (addons-911532) DBG |       <readonly/>
	I0929 10:19:50.452740    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 10:19:50.452748    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452760    8330 main.go:141] libmachine: (addons-911532) DBG |     <disk type='file' device='disk'>
	I0929 10:19:50.452768    8330 main.go:141] libmachine: (addons-911532) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 10:19:50.452781    8330 main.go:141] libmachine: (addons-911532) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/addons-911532.rawdisk'/>
	I0929 10:19:50.452797    8330 main.go:141] libmachine: (addons-911532) DBG |       <target dev='hda' bus='virtio'/>
	I0929 10:19:50.452811    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 10:19:50.452820    8330 main.go:141] libmachine: (addons-911532) DBG |     </disk>
	I0929 10:19:50.452832    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 10:19:50.452844    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 10:19:50.452853    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452868    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 10:19:50.452882    8330 main.go:141] libmachine: (addons-911532) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 10:19:50.452894    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 10:19:50.452905    8330 main.go:141] libmachine: (addons-911532) DBG |     </controller>
	I0929 10:19:50.452917    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452928    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:96:11:56'/>
	I0929 10:19:50.452937    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='mk-addons-911532'/>
	I0929 10:19:50.452945    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.452955    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 10:19:50.452975    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.452983    8330 main.go:141] libmachine: (addons-911532) DBG |     <interface type='network'>
	I0929 10:19:50.452999    8330 main.go:141] libmachine: (addons-911532) DBG |       <mac address='52:54:00:be:29:87'/>
	I0929 10:19:50.453014    8330 main.go:141] libmachine: (addons-911532) DBG |       <source network='default'/>
	I0929 10:19:50.453022    8330 main.go:141] libmachine: (addons-911532) DBG |       <model type='virtio'/>
	I0929 10:19:50.453031    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 10:19:50.453042    8330 main.go:141] libmachine: (addons-911532) DBG |     </interface>
	I0929 10:19:50.453053    8330 main.go:141] libmachine: (addons-911532) DBG |     <serial type='pty'>
	I0929 10:19:50.453062    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='isa-serial' port='0'>
	I0929 10:19:50.453073    8330 main.go:141] libmachine: (addons-911532) DBG |         <model name='isa-serial'/>
	I0929 10:19:50.453081    8330 main.go:141] libmachine: (addons-911532) DBG |       </target>
	I0929 10:19:50.453088    8330 main.go:141] libmachine: (addons-911532) DBG |     </serial>
	I0929 10:19:50.453094    8330 main.go:141] libmachine: (addons-911532) DBG |     <console type='pty'>
	I0929 10:19:50.453106    8330 main.go:141] libmachine: (addons-911532) DBG |       <target type='serial' port='0'/>
	I0929 10:19:50.453114    8330 main.go:141] libmachine: (addons-911532) DBG |     </console>
	I0929 10:19:50.453119    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='mouse' bus='ps2'/>
	I0929 10:19:50.453131    8330 main.go:141] libmachine: (addons-911532) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 10:19:50.453138    8330 main.go:141] libmachine: (addons-911532) DBG |     <audio id='1' type='none'/>
	I0929 10:19:50.453144    8330 main.go:141] libmachine: (addons-911532) DBG |     <memballoon model='virtio'>
	I0929 10:19:50.453153    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 10:19:50.453158    8330 main.go:141] libmachine: (addons-911532) DBG |     </memballoon>
	I0929 10:19:50.453162    8330 main.go:141] libmachine: (addons-911532) DBG |     <rng model='virtio'>
	I0929 10:19:50.453170    8330 main.go:141] libmachine: (addons-911532) DBG |       <backend model='random'>/dev/random</backend>
	I0929 10:19:50.453176    8330 main.go:141] libmachine: (addons-911532) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 10:19:50.453193    8330 main.go:141] libmachine: (addons-911532) DBG |     </rng>
	I0929 10:19:50.453213    8330 main.go:141] libmachine: (addons-911532) DBG |   </devices>
	I0929 10:19:50.453227    8330 main.go:141] libmachine: (addons-911532) DBG | </domain>
	I0929 10:19:50.453239    8330 main.go:141] libmachine: (addons-911532) DBG | 
	I0929 10:19:51.804030    8330 main.go:141] libmachine: (addons-911532) waiting for domain to start...
	I0929 10:19:51.805192    8330 main.go:141] libmachine: (addons-911532) domain is now running
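The define-and-start sequence logged above boils down to two libvirt calls: define the persistent domain from the XML that was printed, then boot it. A minimal Go sketch of that sequence, assuming the libvirt.org/go/libvirt bindings and a caller-supplied domainXML string like the one above (an illustration only, not the kvm2 driver's actual code):

    package sketch

    import (
        "libvirt.org/go/libvirt"
    )

    // defineAndStart defines a persistent domain from the given XML and boots it,
    // mirroring the "defining domain..." / "starting domain..." steps in the log.
    func defineAndStart(uri, domainXML string) error {
        conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system" (assumed URI)
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // boots the defined domain
    }

The driver also verifies that the "default" and "mk-addons-911532" networks are active before starting the domain; that step is omitted from the sketch.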
	I0929 10:19:51.805217    8330 main.go:141] libmachine: (addons-911532) waiting for IP...
	I0929 10:19:51.805985    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:51.806446    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:51.806469    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:51.806682    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:51.806731    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:51.806690    8358 retry.go:31] will retry after 261.427598ms: waiting for domain to come up
	I0929 10:19:52.070280    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.070742    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.070767    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.070971    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.070993    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.070958    8358 retry.go:31] will retry after 240.955253ms: waiting for domain to come up
	I0929 10:19:52.313494    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.313944    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.313967    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.314221    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.314248    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.314183    8358 retry.go:31] will retry after 448.127739ms: waiting for domain to come up
	I0929 10:19:52.763659    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:52.764289    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:52.764319    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:52.764571    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:52.764611    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:52.764572    8358 retry.go:31] will retry after 440.800517ms: waiting for domain to come up
	I0929 10:19:53.207391    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.207852    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.207875    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.208100    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.208135    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.208089    8358 retry.go:31] will retry after 608.456206ms: waiting for domain to come up
	I0929 10:19:53.817995    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:53.818510    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:53.818534    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:53.818802    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:53.818825    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:53.818782    8358 retry.go:31] will retry after 587.200151ms: waiting for domain to come up
	I0929 10:19:54.407631    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:54.408171    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:54.408193    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:54.408543    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:54.408576    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:54.408497    8358 retry.go:31] will retry after 1.130343319s: waiting for domain to come up
	I0929 10:19:55.540378    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:55.540927    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:55.540953    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:55.541189    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:55.541213    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:55.541166    8358 retry.go:31] will retry after 1.101264298s: waiting for domain to come up
	I0929 10:19:56.643818    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:56.644330    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:56.644369    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:56.644602    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:56.644625    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:56.644570    8358 retry.go:31] will retry after 1.643468675s: waiting for domain to come up
	I0929 10:19:58.290455    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:58.290889    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:58.290912    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:58.291164    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:58.291183    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:58.291128    8358 retry.go:31] will retry after 1.40280966s: waiting for domain to come up
	I0929 10:19:59.695464    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:19:59.695974    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:19:59.695992    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:19:59.696272    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:19:59.696323    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:19:59.696265    8358 retry.go:31] will retry after 1.862603319s: waiting for domain to come up
	I0929 10:20:01.561785    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:01.562380    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:01.562407    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:01.562655    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:01.562683    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:01.562634    8358 retry.go:31] will retry after 2.941456391s: waiting for domain to come up
	I0929 10:20:04.507942    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:04.508465    8330 main.go:141] libmachine: (addons-911532) DBG | no network interface addresses found for domain addons-911532 (source=lease)
	I0929 10:20:04.508487    8330 main.go:141] libmachine: (addons-911532) DBG | trying to list again with source=arp
	I0929 10:20:04.508708    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find current IP address of domain addons-911532 in network mk-addons-911532 (interfaces detected: [])
	I0929 10:20:04.508754    8330 main.go:141] libmachine: (addons-911532) DBG | I0929 10:20:04.508692    8358 retry.go:31] will retry after 3.063009242s: waiting for domain to come up
	I0929 10:20:07.575419    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.575975    8330 main.go:141] libmachine: (addons-911532) found domain IP: 192.168.39.179
	I0929 10:20:07.575990    8330 main.go:141] libmachine: (addons-911532) reserving static IP address...
	I0929 10:20:07.575998    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has current primary IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.576366    8330 main.go:141] libmachine: (addons-911532) DBG | unable to find host DHCP lease matching {name: "addons-911532", mac: "52:54:00:96:11:56", ip: "192.168.39.179"} in network mk-addons-911532
	I0929 10:20:07.774232    8330 main.go:141] libmachine: (addons-911532) DBG | Getting to WaitForSSH function...
	I0929 10:20:07.774263    8330 main.go:141] libmachine: (addons-911532) reserved static IP address 192.168.39.179 for domain addons-911532
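The repeated "will retry after ...: waiting for domain to come up" lines above come from a poll-with-backoff loop: query the DHCP leases (then ARP) for the domain's MAC address, and if no address is visible yet, sleep a growing, jittered interval and try again until the deadline. A simplified sketch of such a loop, with a hypothetical lookup callback standing in for the lease/ARP query (this is not minikube's retry.go):

    package sketch

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it yields an address or the deadline passes,
    // sleeping a growing, jittered interval between attempts, in the spirit of
    // the "will retry after ...: waiting for domain to come up" lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow the base interval
        }
        return "", errors.New("timed out waiting for domain IP")
    }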
	I0929 10:20:07.774309    8330 main.go:141] libmachine: (addons-911532) waiting for SSH...
	I0929 10:20:07.777412    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.777949    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.777974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.778160    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH client type: external
	I0929 10:20:07.778178    8330 main.go:141] libmachine: (addons-911532) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa (-rw-------)
	I0929 10:20:07.778240    8330 main.go:141] libmachine: (addons-911532) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 10:20:07.778264    8330 main.go:141] libmachine: (addons-911532) DBG | About to run SSH command:
	I0929 10:20:07.778276    8330 main.go:141] libmachine: (addons-911532) DBG | exit 0
	I0929 10:20:07.917138    8330 main.go:141] libmachine: (addons-911532) DBG | SSH cmd err, output: <nil>: 
	I0929 10:20:07.917411    8330 main.go:141] libmachine: (addons-911532) domain creation complete
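The "Using SSH client type: external" block above shows the exact ssh invocation used to probe the guest: run "exit 0" remotely and treat a zero exit status as "SSH is up". A small sketch of that probe with os/exec, reusing the options from the log (the user and port are taken from the log; the helper name is made up for the sketch):

    package sketch

    import "os/exec"

    // sshReachable runs `ssh ... docker@<ip> exit 0` with the options shown in
    // the log and reports whether sshd answered (zero exit status).
    func sshReachable(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit", "0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }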
	I0929 10:20:07.917792    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:07.918434    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918664    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:07.918846    8330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 10:20:07.918860    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:07.920305    8330 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 10:20:07.920320    8330 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 10:20:07.920325    8330 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 10:20:07.920330    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:07.922896    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923256    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:07.923281    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:07.923438    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:07.923635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923781    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:07.923951    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:07.924122    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:07.924327    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:07.924337    8330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 10:20:08.032128    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.032150    8330 main.go:141] libmachine: Detecting the provisioner...
	I0929 10:20:08.032158    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.035150    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035650    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.035676    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.035849    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.036023    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036162    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.036310    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.036503    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.036699    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.036709    8330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 10:20:08.146139    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 10:20:08.146218    8330 main.go:141] libmachine: found compatible host: buildroot
	I0929 10:20:08.146225    8330 main.go:141] libmachine: Provisioning with buildroot...
	I0929 10:20:08.146232    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146517    8330 buildroot.go:166] provisioning hostname "addons-911532"
	I0929 10:20:08.146546    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.146724    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.149534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.149903    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.149931    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.150079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.150261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150452    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.150570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.150709    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.150906    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.150918    8330 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-911532 && echo "addons-911532" | sudo tee /etc/hostname
	I0929 10:20:08.278974    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-911532
	
	I0929 10:20:08.279001    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.282211    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282657    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.282689    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.282950    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.283137    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283318    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.283463    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.283602    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.283817    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.283855    8330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-911532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-911532/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-911532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:20:08.400849    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:20:08.400874    8330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 10:20:08.400909    8330 buildroot.go:174] setting up certificates
	I0929 10:20:08.400922    8330 provision.go:84] configureAuth start
	I0929 10:20:08.400933    8330 main.go:141] libmachine: (addons-911532) Calling .GetMachineName
	I0929 10:20:08.401221    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.404488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.404861    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.404881    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.405105    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.407451    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.407783    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.407808    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.408007    8330 provision.go:143] copyHostCerts
	I0929 10:20:08.408072    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 10:20:08.408347    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 10:20:08.408478    8330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 10:20:08.408562    8330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.addons-911532 san=[127.0.0.1 192.168.39.179 addons-911532 localhost minikube]
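The "generating server cert" step issues a server certificate signed by the local CA, with the SANs listed above (127.0.0.1, 192.168.39.179, addons-911532, localhost, minikube). A rough Go sketch of that kind of issuance using crypto/x509, assuming the CA certificate and key are already loaded; the validity period, key size and subject below are placeholder choices, not the values minikube uses:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by caCert/caKey with the
    // given DNS and IP SANs, roughly what the "generating server cert" step does.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        dnsNames []string, ips []net.IP) (certDER []byte, key *rsa.PrivateKey, err error) {

        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-911532"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. addons-911532, localhost, minikube
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.179
        }
        certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return certDER, key, err
    }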
	I0929 10:20:08.457469    8330 provision.go:177] copyRemoteCerts
	I0929 10:20:08.457527    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:20:08.457548    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.460625    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.460962    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.460991    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.461153    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.461390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.461509    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.461643    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.546790    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:20:08.577312    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:20:08.607181    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:20:08.636055    8330 provision.go:87] duration metric: took 235.1207ms to configureAuth
	I0929 10:20:08.636085    8330 buildroot.go:189] setting minikube options for container-runtime
	I0929 10:20:08.636280    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:08.636388    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.639147    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639482    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.639525    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.639765    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.639937    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640129    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.640246    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.640408    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.640614    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.640629    8330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:20:08.884944    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:20:08.884967    8330 main.go:141] libmachine: Checking connection to Docker...
	I0929 10:20:08.884977    8330 main.go:141] libmachine: (addons-911532) Calling .GetURL
	I0929 10:20:08.886395    8330 main.go:141] libmachine: (addons-911532) DBG | using libvirt version 8000000
	I0929 10:20:08.888906    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889281    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.889309    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.889489    8330 main.go:141] libmachine: Docker is up and running!
	I0929 10:20:08.889503    8330 main.go:141] libmachine: Reticulating splines...
	I0929 10:20:08.889509    8330 client.go:171] duration metric: took 19.140044962s to LocalClient.Create
	I0929 10:20:08.889527    8330 start.go:167] duration metric: took 19.140101533s to libmachine.API.Create "addons-911532"
	I0929 10:20:08.889535    8330 start.go:293] postStartSetup for "addons-911532" (driver="kvm2")
	I0929 10:20:08.889546    8330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:20:08.889561    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:08.889787    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:20:08.889810    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.893400    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893828    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.893850    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.893987    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.894222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.894407    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.894549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:08.979409    8330 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:20:08.984274    8330 info.go:137] Remote host: Buildroot 2025.02
	I0929 10:20:08.984296    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 10:20:08.984377    8330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 10:20:08.984400    8330 start.go:296] duration metric: took 94.85978ms for postStartSetup
	I0929 10:20:08.984429    8330 main.go:141] libmachine: (addons-911532) Calling .GetConfigRaw
	I0929 10:20:08.985063    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:08.987970    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988332    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.988371    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.988631    8330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/config.json ...
	I0929 10:20:08.988817    8330 start.go:128] duration metric: took 19.255225953s to createHost
	I0929 10:20:08.988846    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:08.991306    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.991862    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:08.991889    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:08.992056    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:08.992222    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992394    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:08.992520    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:08.992681    8330 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.992946    8330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0929 10:20:08.992962    8330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 10:20:09.100129    8330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759141209.059279000
	
	I0929 10:20:09.100152    8330 fix.go:216] guest clock: 1759141209.059279000
	I0929 10:20:09.100159    8330 fix.go:229] Guest: 2025-09-29 10:20:09.059279 +0000 UTC Remote: 2025-09-29 10:20:08.988831556 +0000 UTC m=+19.364626106 (delta=70.447444ms)
	I0929 10:20:09.100191    8330 fix.go:200] guest clock delta is within tolerance: 70.447444ms
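The guest-clock check above runs `date +%s.%N` over SSH and compares the result with the host clock (here the delta is 70.447444ms and is accepted). A tiny sketch of that comparison; the two-second tolerance is an assumed value, since the log only reports that the delta is within tolerance:

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and compares it with
    // the host clock. The 2s tolerance is an assumption for this sketch.
    func clockDeltaOK(guestDate string, host time.Time) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
        if err != nil {
            return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestDate, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= 2*time.Second, nil
    }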
	I0929 10:20:09.100196    8330 start.go:83] releasing machines lock for "addons-911532", held for 19.366681656s
	I0929 10:20:09.100216    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.100557    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:09.103690    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104033    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.104062    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.104246    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104743    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.104923    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:09.105046    8330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:20:09.105097    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.105112    8330 ssh_runner.go:195] Run: cat /version.json
	I0929 10:20:09.105130    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:09.108069    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108119    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108464    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108488    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108512    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:09.108534    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:09.108734    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108749    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:09.108912    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.108926    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:09.109101    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109113    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:09.109256    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.109260    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:09.216417    8330 ssh_runner.go:195] Run: systemctl --version
	I0929 10:20:09.222846    8330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:20:09.384636    8330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 10:20:09.391852    8330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 10:20:09.391906    8330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:09.412791    8330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 10:20:09.412813    8330 start.go:495] detecting cgroup driver to use...
	I0929 10:20:09.412882    8330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:20:09.432417    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:20:09.448433    8330 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:20:09.448494    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:20:09.465964    8330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:20:09.481975    8330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:20:09.629225    8330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:20:09.840833    8330 docker.go:234] disabling docker service ...
	I0929 10:20:09.840898    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:20:09.858103    8330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:20:09.872733    8330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:20:10.028160    8330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:20:10.170725    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:20:10.186498    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:20:10.208790    8330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:20:10.208840    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.221373    8330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 10:20:10.221427    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.233339    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.245762    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.257848    8330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:20:10.270858    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.283122    8330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:10.304068    8330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
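	(Note: the sed/grep commands above fully determine the values minikube writes into /etc/crio/crio.conf.d/02-crio.conf, but the resulting file itself is not shown in this log. A minimal sketch of the affected fragment, assuming the stock CRI-O section layout for these keys:

	# /etc/crio/crio.conf.d/02-crio.conf (sketch of the keys touched above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	)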
	I0929 10:20:10.316039    8330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:20:10.326321    8330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:20:10.326388    8330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:20:10.348550    8330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:20:10.361988    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:10.507746    8330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:20:10.612811    8330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:20:10.612899    8330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:20:10.618569    8330 start.go:563] Will wait 60s for crictl version
	I0929 10:20:10.618625    8330 ssh_runner.go:195] Run: which crictl
	I0929 10:20:10.622944    8330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:20:10.665514    8330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 10:20:10.665614    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.694916    8330 ssh_runner.go:195] Run: crio --version
	I0929 10:20:10.724814    8330 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 10:20:10.726157    8330 main.go:141] libmachine: (addons-911532) Calling .GetIP
	I0929 10:20:10.729133    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729545    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:10.729575    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:10.729788    8330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 10:20:10.734601    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:10.750745    8330 kubeadm.go:875] updating cluster {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:20:10.750830    8330 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:10.750873    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:10.786965    8330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 10:20:10.787034    8330 ssh_runner.go:195] Run: which lz4
	I0929 10:20:10.791694    8330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 10:20:10.796598    8330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 10:20:10.796640    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 10:20:12.287040    8330 crio.go:462] duration metric: took 1.495381435s to copy over tarball
	I0929 10:20:12.287115    8330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 10:20:13.904851    8330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.617709548s)
	I0929 10:20:13.904878    8330 crio.go:469] duration metric: took 1.617810623s to extract the tarball
	I0929 10:20:13.904887    8330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 10:20:13.946333    8330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:13.991640    8330 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:13.991663    8330 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:20:13.991671    8330 kubeadm.go:926] updating node { 192.168.39.179 8443 v1.34.0 crio true true} ...
	I0929 10:20:13.991761    8330 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-911532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:20:13.991839    8330 ssh_runner.go:195] Run: crio config
	I0929 10:20:14.038150    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:14.038169    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:14.038180    8330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:20:14.038198    8330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-911532 NodeName:addons-911532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:20:14.038300    8330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-911532"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:20:14.038381    8330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:20:14.053651    8330 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:20:14.053724    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:20:14.068031    8330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0929 10:20:14.092020    8330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:20:14.116202    8330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0929 10:20:14.140056    8330 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0929 10:20:14.144733    8330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:14.159800    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:14.314527    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:14.337683    8330 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532 for IP: 192.168.39.179
	I0929 10:20:14.337707    8330 certs.go:194] generating shared ca certs ...
	I0929 10:20:14.337743    8330 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.337913    8330 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 10:20:14.828624    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt ...
	I0929 10:20:14.828656    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt: {Name:mk605d19c615ec63bb49553d32d16a9968996447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828869    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key ...
	I0929 10:20:14.828887    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key: {Name:mk116fbaf9146e252d64c98b19fb4d5d877a65f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:14.828995    8330 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 10:20:15.061750    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt ...
	I0929 10:20:15.061779    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt: {Name:mk3eeeaec93a3e580abc1a0f8721c39cfd08ef60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.061960    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key ...
	I0929 10:20:15.061975    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key: {Name:mkc397709470903133ba0b5a62b9ca66bd0144de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.062076    8330 certs.go:256] generating profile certs ...
	I0929 10:20:15.062154    8330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key
	I0929 10:20:15.062173    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt with IP's: []
	I0929 10:20:15.253281    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt ...
	I0929 10:20:15.253313    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: {Name:mkb6d93d9208f1e65858ef821a0bf2997c10f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253506    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key ...
	I0929 10:20:15.253523    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.key: {Name:mk3162bfdf768dab29342cf9830ff9fd4702cb96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.253628    8330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f
	I0929 10:20:15.253656    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.179]
	I0929 10:20:15.479023    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f ...
	I0929 10:20:15.479053    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f: {Name:mkae8e94bfacd54df10c2599ebed7801d300337d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479223    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f ...
	I0929 10:20:15.479241    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f: {Name:mk28de5248c1f787c9e307292da7671529b3c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.479345    8330 certs.go:381] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt
	I0929 10:20:15.479457    8330 certs.go:385] copying /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key.bf65b89f -> /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key
	I0929 10:20:15.479530    8330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key
	I0929 10:20:15.479554    8330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt with IP's: []
	I0929 10:20:15.890186    8330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt ...
	I0929 10:20:15.890217    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt: {Name:mk8d6457a0876ed0180e350f3cff3f286feaeb73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890408    8330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key ...
	I0929 10:20:15.890424    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key: {Name:mk5fa1c5bb7ab27f1723ebd353f821745dcf151a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:15.890613    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:20:15.890663    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:20:15.890698    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:20:15.890741    8330 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem (1679 bytes)
	I0929 10:20:15.891316    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:20:15.938903    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:20:15.978982    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:20:16.009727    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:20:16.039344    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:20:16.070479    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:20:16.101539    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:20:16.131091    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:20:16.161171    8330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:20:16.190550    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:20:16.210923    8330 ssh_runner.go:195] Run: openssl version
	I0929 10:20:16.217450    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:20:16.231199    8330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236531    8330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.236589    8330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:16.244248    8330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 10:20:16.258217    8330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:20:16.263250    8330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:20:16.263302    8330 kubeadm.go:392] StartCluster: {Name:addons-911532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-911532 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:16.263401    8330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:20:16.263469    8330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:20:16.311031    8330 cri.go:89] found id: ""
	I0929 10:20:16.311136    8330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:20:16.324180    8330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:20:16.335996    8330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:20:16.348491    8330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:20:16.348510    8330 kubeadm.go:157] found existing configuration files:
	
	I0929 10:20:16.348558    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:20:16.359693    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:20:16.359754    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:20:16.371848    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:20:16.382965    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:20:16.383055    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:20:16.395004    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.405764    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:20:16.405833    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:20:16.417554    8330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:20:16.428340    8330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:20:16.428405    8330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:20:16.439786    8330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 10:20:16.601410    8330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:20:29.233520    8330 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:20:29.233611    8330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:20:29.233698    8330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:20:29.233818    8330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:20:29.233926    8330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:20:29.233987    8330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:20:29.236675    8330 out.go:252]   - Generating certificates and keys ...
	I0929 10:20:29.236749    8330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:20:29.236804    8330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:20:29.236891    8330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:20:29.236989    8330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:20:29.237083    8330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:20:29.237156    8330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:20:29.237245    8330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:20:29.237406    8330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237472    8330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:20:29.237610    8330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-911532 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0929 10:20:29.237672    8330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:20:29.237726    8330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:20:29.237792    8330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:20:29.237868    8330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:20:29.237928    8330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:20:29.237983    8330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:20:29.238037    8330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:20:29.238094    8330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:20:29.238141    8330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:20:29.238212    8330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:20:29.238272    8330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:20:29.239488    8330 out.go:252]   - Booting up control plane ...
	I0929 10:20:29.239556    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:20:29.239621    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:20:29.239677    8330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:20:29.239796    8330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:20:29.239908    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:20:29.240017    8330 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:20:29.240091    8330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:20:29.240132    8330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:20:29.240245    8330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:20:29.240338    8330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:20:29.240414    8330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500993452s
	I0929 10:20:29.240491    8330 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:20:29.240576    8330 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.179:8443/livez
	I0929 10:20:29.240647    8330 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:20:29.240713    8330 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:20:29.240773    8330 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.605979769s
	I0929 10:20:29.240827    8330 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.265600399s
	I0929 10:20:29.240895    8330 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001411979s
	I0929 10:20:29.241002    8330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:20:29.241131    8330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:20:29.241217    8330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:20:29.241415    8330 kubeadm.go:310] [mark-control-plane] Marking the node addons-911532 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:20:29.241473    8330 kubeadm.go:310] [bootstrap-token] Using token: xpmnvs.em3s359nhdig9yyg
	I0929 10:20:29.243962    8330 out.go:252]   - Configuring RBAC rules ...
	I0929 10:20:29.244057    8330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:20:29.244129    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:20:29.244271    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:20:29.244454    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:20:29.244608    8330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:20:29.244721    8330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:20:29.244831    8330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:20:29.244870    8330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:20:29.244921    8330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:20:29.244927    8330 kubeadm.go:310] 
	I0929 10:20:29.244982    8330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:20:29.244987    8330 kubeadm.go:310] 
	I0929 10:20:29.245051    8330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:20:29.245057    8330 kubeadm.go:310] 
	I0929 10:20:29.245078    8330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:20:29.245167    8330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:20:29.245249    8330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:20:29.245259    8330 kubeadm.go:310] 
	I0929 10:20:29.245332    8330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:20:29.245343    8330 kubeadm.go:310] 
	I0929 10:20:29.245425    8330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:20:29.245437    8330 kubeadm.go:310] 
	I0929 10:20:29.245517    8330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:20:29.245623    8330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:20:29.245684    8330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:20:29.245691    8330 kubeadm.go:310] 
	I0929 10:20:29.245784    8330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:20:29.245882    8330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:20:29.245889    8330 kubeadm.go:310] 
	I0929 10:20:29.245989    8330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246119    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 \
	I0929 10:20:29.246143    8330 kubeadm.go:310] 	--control-plane 
	I0929 10:20:29.246149    8330 kubeadm.go:310] 
	I0929 10:20:29.246228    8330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:20:29.246239    8330 kubeadm.go:310] 
	I0929 10:20:29.246310    8330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xpmnvs.em3s359nhdig9yyg \
	I0929 10:20:29.246451    8330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdcfa3247e581ebf0f11f1ff8ec879a8ec01cf6ce10faea278bc7fcbbc98f689 
	I0929 10:20:29.246468    8330 cni.go:84] Creating CNI manager for ""
	I0929 10:20:29.246477    8330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:20:29.248668    8330 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:20:29.249832    8330 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:20:29.264165    8330 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 10:20:29.287307    8330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:20:29.287371    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.287441    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-911532 minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=addons-911532 minikube.k8s.io/primary=true
	I0929 10:20:29.333982    8330 ops.go:34] apiserver oom_adj: -16
	I0929 10:20:29.443148    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:29.943547    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.443943    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:30.944035    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.443398    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:31.943338    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.443329    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:32.944216    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.443626    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:33.943212    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.443454    8330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:34.577904    8330 kubeadm.go:1105] duration metric: took 5.290578825s to wait for elevateKubeSystemPrivileges
	I0929 10:20:34.577946    8330 kubeadm.go:394] duration metric: took 18.314646355s to StartCluster
	I0929 10:20:34.577972    8330 settings.go:142] acquiring lock: {Name:mkbd44ffc9a24198fd299896a4cba1c423a77e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578089    8330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:20:34.578570    8330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/kubeconfig: {Name:mka4c30ad2429731194076d58cd88072dc744e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:34.578797    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:20:34.578808    8330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:20:34.578883    8330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:20:34.578998    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.579013    8330 addons.go:69] Setting metrics-server=true in profile "addons-911532"
	I0929 10:20:34.579019    8330 addons.go:69] Setting inspektor-gadget=true in profile "addons-911532"
	I0929 10:20:34.579032    8330 addons.go:238] Setting addon metrics-server=true in "addons-911532"
	I0929 10:20:34.579001    8330 addons.go:69] Setting yakd=true in profile "addons-911532"
	I0929 10:20:34.579051    8330 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579058    8330 addons.go:69] Setting registry=true in profile "addons-911532"
	I0929 10:20:34.579072    8330 addons.go:69] Setting registry-creds=true in profile "addons-911532"
	I0929 10:20:34.579076    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579083    8330 addons.go:69] Setting ingress=true in profile "addons-911532"
	I0929 10:20:34.579081    8330 addons.go:69] Setting cloud-spanner=true in profile "addons-911532"
	I0929 10:20:34.579094    8330 addons.go:238] Setting addon ingress=true in "addons-911532"
	I0929 10:20:34.579096    8330 addons.go:238] Setting addon registry=true in "addons-911532"
	I0929 10:20:34.579103    8330 addons.go:238] Setting addon cloud-spanner=true in "addons-911532"
	I0929 10:20:34.579073    8330 addons.go:69] Setting default-storageclass=true in profile "addons-911532"
	I0929 10:20:34.579122    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579121    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-911532"
	I0929 10:20:34.579135    8330 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-911532"
	I0929 10:20:34.579139    8330 addons.go:69] Setting ingress-dns=true in profile "addons-911532"
	I0929 10:20:34.579153    8330 addons.go:238] Setting addon ingress-dns=true in "addons-911532"
	I0929 10:20:34.579163    8330 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:34.579173    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579182    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579066    8330 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-911532"
	I0929 10:20:34.579422    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579042    8330 addons.go:69] Setting storage-provisioner=true in profile "addons-911532"
	I0929 10:20:34.579481    8330 addons.go:238] Setting addon storage-provisioner=true in "addons-911532"
	I0929 10:20:34.579516    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579556    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579584    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579596    8330 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-911532"
	I0929 10:20:34.579608    8330 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-911532"
	I0929 10:20:34.579617    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579621    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579642    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579645    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579680    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579042    8330 addons.go:238] Setting addon inspektor-gadget=true in "addons-911532"
	I0929 10:20:34.579704    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579864    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579866    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579902    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579927    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.579956    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.579976    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580024    8330 addons.go:69] Setting volcano=true in profile "addons-911532"
	I0929 10:20:34.579130    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580036    8330 addons.go:238] Setting addon volcano=true in "addons-911532"
	I0929 10:20:34.580046    8330 addons.go:69] Setting volumesnapshots=true in profile "addons-911532"
	I0929 10:20:34.580056    8330 addons.go:238] Setting addon volumesnapshots=true in "addons-911532"
	I0929 10:20:34.579063    8330 addons.go:238] Setting addon yakd=true in "addons-911532"
	I0929 10:20:34.579586    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580102    8330 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-911532"
	I0929 10:20:34.579127    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580205    8330 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-911532"
	I0929 10:20:34.579104    8330 addons.go:238] Setting addon registry-creds=true in "addons-911532"
	I0929 10:20:34.580465    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580663    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580700    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.579074    8330 addons.go:69] Setting gcp-auth=true in profile "addons-911532"
	I0929 10:20:34.580761    8330 mustload.go:65] Loading cluster: addons-911532
	I0929 10:20:34.580485    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.580518    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.581600    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.581630    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.580542    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580556    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.582054    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582079    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582213    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582242    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.582457    8330 out.go:179] * Verifying Kubernetes components...
	I0929 10:20:34.580566    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580580    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.580599    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.582793    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.584547    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.584595    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.586549    8330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:34.587657    8330 config.go:182] Loaded profile config "addons-911532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:34.587871    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.587947    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.588033    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.588105    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.589680    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.589749    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.611209    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0929 10:20:34.619982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0929 10:20:34.620045    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0929 10:20:34.620051    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0929 10:20:34.619982    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620679    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.620992    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.621009    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.621801    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.621914    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0929 10:20:34.621956    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0929 10:20:34.622631    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.622650    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.623029    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623106    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0929 10:20:34.623707    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.623823    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623840    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.623952    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.623963    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624510    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.624527    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.624583    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.624625    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.625263    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.625789    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.625829    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.626174    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.626661    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.626678    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627096    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627432    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.627595    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627607    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.627652    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.627682    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.627733    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.627744    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.628166    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628190    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628220    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.628314    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.628759    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.628788    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.629020    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.629055    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.631879    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0929 10:20:34.632376    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.632705    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0929 10:20:34.633030    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.633048    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.633193    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.633230    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.633267    8330 addons.go:238] Setting addon default-storageclass=true in "addons-911532"
	I0929 10:20:34.633652    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.633800    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.634170    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.634207    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.635813    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.635852    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.636152    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.636325    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0929 10:20:34.636872    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.637313    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.637328    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642530    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.642548    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.642626    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.642679    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0929 10:20:34.643872    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.644142    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.644246    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.644288    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.645594    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0929 10:20:34.648922    8330 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-911532"
	I0929 10:20:34.649021    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.649433    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.649468    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.648943    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.652866    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.653073    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.653088    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.653480    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0929 10:20:34.653596    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.654397    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.654434    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.654714    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0929 10:20:34.654720    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.654766    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.654784    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655230    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.655412    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.655448    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.655888    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.655923    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.656194    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.656228    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.656428    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.657115    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.657140    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.660741    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.661324    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.661373    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.664929    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.665442    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0929 10:20:34.665663    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I0929 10:20:34.666958    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.666976    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.667484    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.667511    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.667663    8330 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:20:34.668039    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.668186    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.668825    8330 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:20:34.668844    8330 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:20:34.668864    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.670363    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0929 10:20:34.670492    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0929 10:20:34.670589    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670638    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.670685    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0929 10:20:34.670850    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.671069    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.673465    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0929 10:20:34.673527    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:34.674063    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.674096    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.674977    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0929 10:20:34.675676    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.676230    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.676248    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.676307    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.676719    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.677275    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.677317    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.677523    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.678840    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678928    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.678990    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.679041    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.679058    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.679469    8330 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:20:34.680842    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:34.680869    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:20:34.680887    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.682698    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.682719    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0929 10:20:34.682798    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682814    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682799    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.682873    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.682971    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.683566    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683632    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.683639    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683654    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683726    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.683774    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.683785    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.683941    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.684015    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.684089    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.684161    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684441    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.684455    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.684741    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.684802    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684849    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.684894    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0929 10:20:34.685225    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685265    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685603    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.685635    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.685757    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.686288    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.686328    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.687002    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.687029    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.690223    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.693652    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.693704    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.698952    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I0929 10:20:34.698970    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.699009    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.698972    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.699052    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.699072    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.698956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.699670    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.699705    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.700063    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.700153    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700166    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700208    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.700218    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.700345    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.700526    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.701231    8330 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:20:34.701911    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.701977    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.702057    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:20:34.702426    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702172    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.702205    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.702855    8330 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:34.703378    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:20:34.703399    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.704803    8330 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:20:34.704895    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:20:34.705477    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0929 10:20:34.705978    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:20:34.705994    8330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:20:34.706011    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.708737    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:20:34.709962    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:20:34.711332    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:20:34.711651    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.711697    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0929 10:20:34.711872    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I0929 10:20:34.711919    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712201    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712421    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.712506    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.712521    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.712998    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.713202    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.713218    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.713266    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.713854    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0929 10:20:34.713974    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.714080    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.714091    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.714089    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:20:34.714230    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.715079    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.715142    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715220    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.715297    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.715368    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.715956    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716009    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.716125    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.716175    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0929 10:20:34.716205    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.716294    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.716343    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0929 10:20:34.716378    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.716486    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.716488    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716500    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:20:34.716534    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.716848    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.716857    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.717024    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.717298    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.717928    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.718122    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0929 10:20:34.718584    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.718977    8330 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:20:34.719471    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719488    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719792    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.719808    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.719952    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:20:34.720195    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.719597    8330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:20:34.720392    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.720598    8330 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:34.720616    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:20:34.720632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.720636    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.720067    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.720145    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.720684    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721147    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.721261    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:20:34.721272    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:20:34.721286    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721295    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.721304    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.721329    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:34.721337    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.721343    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:20:34.721370    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.721378    8330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:34.721386    8330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:20:34.721397    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.722081    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722147    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722188    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.722501    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.722717    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.722815    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I0929 10:20:34.723931    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.724477    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.724627    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0929 10:20:34.724682    8330 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:20:34.725137    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0929 10:20:34.725214    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725408    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.725474    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.725712    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.725963    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.725985    8330 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:20:34.726200    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726227    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726409    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.726429    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.726650    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.726822    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727082    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.727129    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.727533    8330 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:20:34.727533    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:20:34.727652    8330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:20:34.727676    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.728686    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:34.728766    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:34.729230    8330 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:20:34.729245    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:20:34.729261    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.730397    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.730781    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.731393    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.731820    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.732216    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:20:34.732339    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.732658    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.732406    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732428    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.732749    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.732857    8330 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:20:34.733003    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:34.733015    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.733085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733094    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:34.733106    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:34.733113    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:34.733174    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.733327    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.733400    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:34.733408    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:20:34.733499    8330 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:20:34.733798    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.733801    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.733912    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.734054    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.734644    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734341    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.734688    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734709    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.734754    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:20:34.734762    8330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:20:34.734774    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.735299    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.735491    8330 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:20:34.735536    8330 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:20:34.735635    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.735893    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.736417    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.736504    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736524    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.736551    8330 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:34.736683    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736733    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:20:34.736746    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.736864    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737094    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.737173    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.737498    8330 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:34.737512    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:20:34.737529    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.738046    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:34.738231    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.738250    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.739179    8330 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:34.739195    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:20:34.739209    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.739655    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.740103    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740604    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.740967    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.740970    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.741030    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.741379    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.741614    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.741632    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.741788    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742109    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.742129    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.742150    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742161    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.742421    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.742535    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.742802    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.742930    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.743127    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743456    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743674    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.743699    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.743973    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744132    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744133    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.744187    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744304    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.744456    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.744462    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.744601    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.744706    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.744809    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.745109    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.745491    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.745518    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.745796    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.745998    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.746170    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.746303    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.746890    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747330    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.747407    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.747570    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.747612    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0929 10:20:34.747719    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.747882    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.747955    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:34.748060    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:34.748397    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:34.748421    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:34.748773    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:34.749012    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:34.750457    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:34.752008    8330 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:20:34.753202    8330 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:20:34.754342    8330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:34.754377    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:20:34.754395    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:34.757852    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758255    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:34.758330    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:34.758551    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:34.758744    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:34.758881    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:34.759050    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	W0929 10:20:35.042687    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.042728    8330 retry.go:31] will retry after 227.252154ms: ssh: handshake failed: read tcp 192.168.39.1:47666->192.168.39.179:22: read: connection reset by peer
	W0929 10:20:35.046188    8330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.046216    8330 retry.go:31] will retry after 146.732464ms: ssh: handshake failed: read tcp 192.168.39.1:47680->192.168.39.179:22: read: connection reset by peer
	I0929 10:20:35.540872    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:35.579899    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:35.660053    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:20:35.660086    8330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:20:35.675711    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:35.683986    8330 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:35.684010    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:20:35.740542    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:20:35.740565    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:20:35.747876    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:35.761273    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:20:35.761301    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:20:35.864047    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:35.966173    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.387341194s)
	I0929 10:20:35.966224    8330 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.379651941s)
	I0929 10:20:35.966281    8330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:35.966363    8330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:20:35.991879    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:36.019637    8330 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:20:36.019659    8330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:20:36.122486    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:36.211453    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:20:36.211479    8330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:20:36.220363    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:36.238690    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:36.284452    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:36.301479    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:20:36.301501    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:20:36.312324    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:20:36.312347    8330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:20:36.401460    8330 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.401485    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:20:36.408098    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:20:36.408119    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:20:36.602526    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:20:36.602552    8330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:20:36.629597    8330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:36.629620    8330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:20:36.659489    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:20:36.659518    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:20:36.760787    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:20:36.760817    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:20:36.780734    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:36.980282    8330 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:36.980312    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:20:37.019180    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:20:37.019209    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:20:37.067476    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:37.210287    8330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:20:37.210314    8330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:20:37.370170    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:20:37.370205    8330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:20:37.411611    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:37.615958    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:20:37.615977    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:20:37.626251    8330 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:20:37.626289    8330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:20:37.851163    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.310253621s)
	I0929 10:20:37.851224    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851237    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851589    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851612    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:37.851627    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:37.851636    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:37.851934    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:37.851969    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:37.851975    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:38.121335    8330 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.121366    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:20:38.153983    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:20:38.154019    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:20:38.462249    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:38.490038    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:20:38.490067    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:20:38.882899    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:20:38.882924    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:20:39.175979    8330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:39.176000    8330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:20:39.522531    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:40.536771    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.956838267s)
	I0929 10:20:40.536814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536829    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.536835    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.861093026s)
	I0929 10:20:40.536874    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.536892    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537112    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537122    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.537133    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537139    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.537144    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537149    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.537151    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.537158    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.539079    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539085    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539093    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.539101    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.539082    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.539102    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.645111    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.645134    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.645420    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.645437    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794330    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.046421969s)
	I0929 10:20:40.794394    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794407    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794407    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.93033074s)
	I0929 10:20:40.794439    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794453    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794500    8330 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.828203764s)
	I0929 10:20:40.794545    8330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.828162665s)
	I0929 10:20:40.794560    8330 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0929 10:20:40.794605    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.80268956s)
	I0929 10:20:40.794635    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794647    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794795    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794805    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794814    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794820    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794832    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794834    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794845    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794854    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794862    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794873    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.794895    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.794902    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.794910    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794917    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.794917    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.672242746s)
	I0929 10:20:40.794943    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.794952    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795217    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795243    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795265    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795271    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795395    8330 node_ready.go:35] waiting up to 6m0s for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.795495    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.795525    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795533    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795542    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:40.795549    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:40.795622    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795630    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.795919    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.795972    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.797514    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:40.797521    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:40.797532    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:40.815143    8330 node_ready.go:49] node "addons-911532" is "Ready"
	I0929 10:20:40.815165    8330 node_ready.go:38] duration metric: took 19.750953ms for node "addons-911532" to be "Ready" ...
	I0929 10:20:40.815177    8330 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:20:40.815221    8330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:20:41.364748    8330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-911532" context rescaled to 1 replicas
	I0929 10:20:42.085122    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.864720869s)
	I0929 10:20:42.085215    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085224    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085491    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085509    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085519    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.085526    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.085859    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.085876    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.085859    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:42.176567    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.937842433s)
	W0929 10:20:42.176609    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:42.176627    8330 retry.go:31] will retry after 344.433489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
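
(Editor's note) The two failed applies above trip kubectl's client-side validation because, per the error text, ig-crd.yaml arrives without apiVersion and kind set, and minikube then falls back to its retry path ("will retry after 344.433489ms"). For orientation only, a minimal Go sketch of that retry-with-backoff pattern around an apply step; the apply helper here is hypothetical, not minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// apply stands in for running `kubectl apply -f <file>`; it fails the way
// the ig-crd.yaml apply fails in the log above.
func apply(file string) error {
	return fmt.Errorf("error validating %q: apiVersion not set, kind not set", file)
}

// retryApply re-runs apply with a short, jittered backoff, mirroring the
// "will retry after ..." lines in the log (illustrative helper only).
func retryApply(file string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(file); err == nil {
			return nil
		}
		backoff := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/ig-crd.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
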
	I0929 10:20:42.229614    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:20:42.229647    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.233209    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.233765    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.233790    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.234014    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.234217    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.234390    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.234549    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:42.363888    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:42.363918    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:42.364176    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:42.364191    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:42.402322    8330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:20:42.497253    8330 addons.go:238] Setting addon gcp-auth=true in "addons-911532"
	I0929 10:20:42.497305    8330 host.go:66] Checking if "addons-911532" exists ...
	I0929 10:20:42.497617    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.497656    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.511982    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0929 10:20:42.512604    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.513162    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.513187    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.513517    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.514096    8330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:20:42.514143    8330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:20:42.521475    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:42.527839    8330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0929 10:20:42.528255    8330 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:20:42.528790    8330 main.go:141] libmachine: Using API Version  1
	I0929 10:20:42.528815    8330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:20:42.529201    8330 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:20:42.529440    8330 main.go:141] libmachine: (addons-911532) Calling .GetState
	I0929 10:20:42.531322    8330 main.go:141] libmachine: (addons-911532) Calling .DriverName
	I0929 10:20:42.531562    8330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:20:42.531583    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHHostname
	I0929 10:20:42.534916    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535403    8330 main.go:141] libmachine: (addons-911532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:11:56", ip: ""} in network mk-addons-911532: {Iface:virbr1 ExpiryTime:2025-09-29 11:20:06 +0000 UTC Type:0 Mac:52:54:00:96:11:56 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-911532 Clientid:01:52:54:00:96:11:56}
	I0929 10:20:42.535429    8330 main.go:141] libmachine: (addons-911532) DBG | domain addons-911532 has defined IP address 192.168.39.179 and MAC address 52:54:00:96:11:56 in network mk-addons-911532
	I0929 10:20:42.535641    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHPort
	I0929 10:20:42.535801    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHKeyPath
	I0929 10:20:42.535982    8330 main.go:141] libmachine: (addons-911532) Calling .GetSSHUsername
	I0929 10:20:42.536112    8330 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/addons-911532/id_rsa Username:docker}
	I0929 10:20:43.911194    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.130428404s)
	I0929 10:20:43.911250    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911264    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911305    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.843789347s)
	I0929 10:20:43.911370    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911417    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.626934708s)
	I0929 10:20:43.911442    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911459    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911385    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.499749199s)
	I0929 10:20:43.911505    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911516    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911518    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911520    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911526    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911535    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911543    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911569    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911624    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911642    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911716    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911726    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911755    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911766    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911777    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.911784    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.911789    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.911796    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.911805    8330 addons.go:479] Verifying addon registry=true in "addons-911532"
	I0929 10:20:43.911889    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.911917    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.913725    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.913735    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:43.913745    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:43.914016    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914032    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.914034    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914043    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914046    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914052    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914056    8330 addons.go:479] Verifying addon metrics-server=true in "addons-911532"
	I0929 10:20:43.914058    8330 addons.go:479] Verifying addon ingress=true in "addons-911532"
	I0929 10:20:43.914108    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:43.914456    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:43.914126    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:43.916045    8330 out.go:179] * Verifying registry addon...
	I0929 10:20:43.916966    8330 out.go:179] * Verifying ingress addon...
	I0929 10:20:43.916970    8330 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-911532 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:20:43.918685    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:20:43.919216    8330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:20:43.932029    8330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:20:43.932051    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:43.932389    8330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:20:43.932401    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
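
(Editor's note) The kapi lines around here poll each addon's pods by label selector until they leave Pending. A rough client-go sketch of that polling pattern, assuming a kubeconfig-based clientset; the function name and timeout are illustrative, not minikube's kapi implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning lists pods matching selector in ns and polls until all
// of them report phase Running or the timeout expires.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods with selector %q in %q not running within %v", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}
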
	I0929 10:20:44.445321    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.455769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974560    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.974637    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.197486    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.73519948s)
	W0929 10:20:45.197531    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:20:45.197552    8330 retry.go:31] will retry after 188.758064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
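
(Editor's note) The failure above is an ordering problem: csi-hostpath-snapshotclass.yaml is applied in the same batch as the VolumeSnapshot CRDs, so the VolumeSnapshotClass kind is not yet registered, hence "ensure CRDs are installed first". A hedged Go sketch of applying the CRDs first and waiting for them to be established before the dependent manifests; the helper is hypothetical, only the file names come from the log:

package main

import (
	"fmt"
	"os/exec"
)

// applyInOrder applies CRD manifests first, waits for the VolumeSnapshot CRD
// to be established, then applies the resources that depend on it.
func applyInOrder(crds, resources []string) error {
	for _, f := range crds {
		if out, err := exec.Command("kubectl", "apply", "-f", f).CombinedOutput(); err != nil {
			return fmt.Errorf("apply %s: %v: %s", f, err, out)
		}
	}
	// Block until the CRD is served so the dependent objects can be created.
	wait := exec.Command("kubectl", "wait", "--for=condition=established",
		"--timeout=60s", "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	if out, err := wait.CombinedOutput(); err != nil {
		return fmt.Errorf("wait for CRD: %v: %s", err, out)
	}
	for _, f := range resources {
		if out, err := exec.Command("kubectl", "apply", "-f", f).CombinedOutput(); err != nil {
			return fmt.Errorf("apply %s: %v: %s", f, err, out)
		}
	}
	return nil
}

func main() {
	crds := []string{"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"}
	resources := []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
	if err := applyInOrder(crds, resources); err != nil {
		fmt.Println(err)
	}
}
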
	I0929 10:20:45.197780    8330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.382549144s)
	I0929 10:20:45.197804    8330 api_server.go:72] duration metric: took 10.618970714s to wait for apiserver process to appear ...
	I0929 10:20:45.197812    8330 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:20:45.197833    8330 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I0929 10:20:45.197777    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.675200772s)
	I0929 10:20:45.197918    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.197936    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198196    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198209    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:45.198225    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198240    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:45.198251    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:45.198499    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:45.198512    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:45.198521    8330 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-911532"
	I0929 10:20:45.200264    8330 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:20:45.202570    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:20:45.239947    8330 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I0929 10:20:45.262006    8330 api_server.go:141] control plane version: v1.34.0
	I0929 10:20:45.262038    8330 api_server.go:131] duration metric: took 64.218943ms to wait for apiserver health ...
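
(Editor's note) The healthz wait above amounts to repeated HTTPS GETs against the apiserver until it answers 200 "ok". A self-contained sketch of such a probe; TLS verification is skipped only to keep the example short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver's /healthz endpoint and
// reports whether it answered 200 "ok".
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	if ok, err := checkHealthz("https://192.168.39.179:8443/healthz"); err != nil || !ok {
		fmt.Println("apiserver not healthy yet:", err)
	}
}
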
	I0929 10:20:45.262051    8330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:20:45.279433    8330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:20:45.279463    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.334344    8330 system_pods.go:59] 20 kube-system pods found
	I0929 10:20:45.334413    8330 system_pods.go:61] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.334425    8330 system_pods.go:61] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334435    8330 system_pods.go:61] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.334444    8330 system_pods.go:61] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.334456    8330 system_pods.go:61] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.334471    8330 system_pods.go:61] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.334480    8330 system_pods.go:61] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.334486    8330 system_pods.go:61] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.334491    8330 system_pods.go:61] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.334500    8330 system_pods.go:61] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.334505    8330 system_pods.go:61] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.334513    8330 system_pods.go:61] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.334517    8330 system_pods.go:61] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.334528    8330 system_pods.go:61] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.334537    8330 system_pods.go:61] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.334549    8330 system_pods.go:61] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.334559    8330 system_pods.go:61] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.334565    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.334571    8330 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending
	I0929 10:20:45.334578    8330 system_pods.go:61] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.334589    8330 system_pods.go:74] duration metric: took 72.532335ms to wait for pod list to return data ...
	I0929 10:20:45.334601    8330 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:20:45.386874    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:45.438919    8330 default_sa.go:45] found service account: "default"
	I0929 10:20:45.438959    8330 default_sa.go:55] duration metric: took 104.351561ms for default service account to be created ...
	I0929 10:20:45.438970    8330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:20:45.479205    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.479375    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.504498    8330 system_pods.go:86] 20 kube-system pods found
	I0929 10:20:45.504542    8330 system_pods.go:89] "amd-gpu-device-plugin-jh557" [5db58f7c-939d-4f8a-ad56-5e623bd97274] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:20:45.504556    8330 system_pods.go:89] "coredns-66bc5c9577-2lxh5" [f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504572    8330 system_pods.go:89] "coredns-66bc5c9577-kjfp7" [70196c9f-e851-4e0a-9bad-67ee23312de9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:20:45.504584    8330 system_pods.go:89] "csi-hostpath-attacher-0" [b9fd31a0-37e1-4eec-a97f-a060c1a18bea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:20:45.504598    8330 system_pods.go:89] "csi-hostpath-resizer-0" [638e6c12-0662-47eb-8929-2e5ad0475f5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:20:45.504609    8330 system_pods.go:89] "csi-hostpathplugin-zrj57" [69f029db-1f0a-43b2-9640-cbdc71a7e26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:20:45.504620    8330 system_pods.go:89] "etcd-addons-911532" [2ce145a3-4923-438d-b404-82561b587638] Running
	I0929 10:20:45.504627    8330 system_pods.go:89] "kube-apiserver-addons-911532" [a51ab0b2-0bff-45cd-be40-63eda67672a3] Running
	I0929 10:20:45.504638    8330 system_pods.go:89] "kube-controller-manager-addons-911532" [17397601-4bd1-4692-8e05-335fc4806674] Running
	I0929 10:20:45.504647    8330 system_pods.go:89] "kube-ingress-dns-minikube" [3a756c7b-7c15-49df-8410-36c37bdf4785] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:20:45.504655    8330 system_pods.go:89] "kube-proxy-zhcch" [abca3b04-811d-4342-831f-4568c9eb2ee7] Running
	I0929 10:20:45.504662    8330 system_pods.go:89] "kube-scheduler-addons-911532" [4d96f119-c772-497f-a863-d6357e0e0e44] Running
	I0929 10:20:45.504674    8330 system_pods.go:89] "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:20:45.504685    8330 system_pods.go:89] "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:20:45.504698    8330 system_pods.go:89] "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:20:45.504712    8330 system_pods.go:89] "registry-creds-764b6fb674-xbt6z" [0c2222bf-5153-4d50-b96c-0a6faff0930f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:20:45.504724    8330 system_pods.go:89] "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:20:45.504734    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bx82z" [9010bb12-b7f9-43a6-85cc-4ea055c57a89] Pending
	I0929 10:20:45.504746    8330 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ldkqf" [b56211c7-445f-47bc-979d-e6fb7ecca920] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:20:45.504759    8330 system_pods.go:89] "storage-provisioner" [03841ce7-2069-4447-8adf-81b1e5233916] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:20:45.504773    8330 system_pods.go:126] duration metric: took 65.795363ms to wait for k8s-apps to be running ...
	I0929 10:20:45.504787    8330 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:20:45.504845    8330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:20:45.714542    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.928522    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.929140    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.136638    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.615124231s)
	W0929 10:20:46.136687    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:46.136709    8330 retry.go:31] will retry after 424.774106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:46.136723    8330 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.605137457s)
	I0929 10:20:46.138626    8330 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:46.139865    8330 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:20:46.140982    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:20:46.141003    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:20:46.207677    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.212782    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:20:46.212807    8330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:20:46.366549    8330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.366571    8330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:20:46.428820    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.428931    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.438908    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:46.561803    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:46.711871    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.927480    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.927570    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.210898    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.425645    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.426862    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.619932    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.233004041s)
	I0929 10:20:47.619964    8330 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.115094401s)
	I0929 10:20:47.619993    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620010    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620013    8330 system_svc.go:56] duration metric: took 2.115222945s WaitForService to wait for kubelet
	I0929 10:20:47.620026    8330 kubeadm.go:578] duration metric: took 13.041192565s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:20:47.620054    8330 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:20:47.620300    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:47.620344    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.620383    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:47.620401    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:47.620637    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:47.620655    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:47.627713    8330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 10:20:47.627742    8330 node_conditions.go:123] node cpu capacity is 2
	I0929 10:20:47.627760    8330 node_conditions.go:105] duration metric: took 7.699657ms to run NodePressure ...
	I0929 10:20:47.627774    8330 start.go:241] waiting for startup goroutines ...
	I0929 10:20:47.711789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:47.936879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.936886    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.243761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.409409    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.970463476s)
	I0929 10:20:48.409454    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409465    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.409848    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.409869    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.409871    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:20:48.409880    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:20:48.409889    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:20:48.410156    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:20:48.410172    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:20:48.411269    8330 addons.go:479] Verifying addon gcp-auth=true in "addons-911532"
	I0929 10:20:48.412822    8330 out.go:179] * Verifying gcp-auth addon...
	I0929 10:20:48.415066    8330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:20:48.435583    8330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:20:48.435609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
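Each "waiting for pod ... current state: Pending" line above is one iteration of a poll that lists pods by label selector and checks whether any has reached Running. A rough sketch of such a poll loop (imports as in the previous sketch, plus time); the selector and namespace are taken from the log, the interval is illustrative, and this mirrors only the shape of the kapi.go wait, not its exact implementation:

    // waitForLabel polls pods matching selector in ns until one is Running or ctx expires.
    func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // e.g. the deadline that makes a wait give up
    		case <-ticker.C:
    		}
    	}
    }

    // Example call matching the log: waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth")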
	I0929 10:20:48.444290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:48.444495    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.711086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.926706    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.926805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.928639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.215777    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.345459    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.783617228s)
	W0929 10:20:49.345502    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:49.345521    8330 retry.go:31] will retry after 771.396332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
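The "apply failed, will retry" / "will retry after ..." pairs above come from a retry helper that re-runs the same kubectl apply with a growing, jittered delay until it succeeds or the attempts run out. A minimal sketch of that retry-with-backoff pattern; the starting delay, doubling, jitter, and attempt cap here are illustrative, not minikube's actual retry.go tuning:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds, roughly doubling the wait
    // (with jitter) between attempts, up to maxAttempts.
    func retryWithBackoff(fn func() error, maxAttempts int) error {
    	delay := 500 * time.Millisecond
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jitter the delay so concurrent retries do not line up.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 3 {
    			return fmt.Errorf("simulated validation failure")
    		}
    		return nil
    	}, 5)
    	fmt.Println("result:", err)
    }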
	I0929 10:20:49.427174    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.427499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.430561    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:49.718587    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.920192    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.923406    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.929629    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.117584    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:50.213086    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.424674    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.428302    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:50.428402    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.711184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.920140    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.925731    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.928955    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.148250    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030628865s)
	W0929 10:20:51.148302    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.148324    8330 retry.go:31] will retry after 576.274213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
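The stderr line is kubectl's validation rejecting a document in ig-crd.yaml that carries no apiVersion or kind, which usually points at an empty or malformed document inside a multi-document manifest. A small pre-flight check for that condition, assuming sigs.k8s.io/yaml and a naive split on "---"; the file path is the one from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// Naive multi-document split; real tooling uses a streaming YAML decoder.
    	for i, doc := range strings.Split(string(data), "\n---") {
    		if strings.TrimSpace(doc) == "" {
    			continue
    		}
    		var tm metav1.TypeMeta
    		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
    			fmt.Printf("doc %d: unparsable: %v\n", i, err)
    			continue
    		}
    		if tm.APIVersion == "" || tm.Kind == "" {
    			// This is the condition kubectl reports as "apiVersion not set, kind not set".
    			fmt.Printf("doc %d: missing apiVersion or kind\n", i)
    		}
    	}
    }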
	I0929 10:20:51.211066    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.423094    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.427282    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:51.429044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.713135    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.725183    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:51.924229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.924401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.930896    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.209703    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.421865    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.425402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.428630    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.716412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.924295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.930265    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.930335    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.936143    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.210924841s)
	W0929 10:20:52.936185    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:52.936205    8330 retry.go:31] will retry after 1.374220476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:53.207601    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.421623    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.424423    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.425168    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:53.716959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.924543    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.924591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.924737    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.206885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.311018    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:54.419619    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.424155    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.425928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.711437    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.921635    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.923109    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.923875    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.207886    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.357008    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.045956607s)
	W0929 10:20:55.357041    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.357056    8330 retry.go:31] will retry after 2.584738248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:55.419277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.423271    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.425958    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.771885    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.922759    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.925311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.926888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.286209    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.421963    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.425255    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.427805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:56.711210    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.919760    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.923081    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.925860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.208042    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.421946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.425265    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.425867    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.707061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.929800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.930205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.931973    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.942181    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:58.207102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.423712    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:58.423755    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.427125    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.715894    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.918954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.921183    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.923721    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:59.059080    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116858718s)
	W0929 10:20:59.059141    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.059166    8330 retry.go:31] will retry after 1.942151479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:59.209232    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:59.417948    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:59.429985    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:59.430010    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130362    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.130976    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.132182    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.132787    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.228828    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.419020    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.421809    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.424680    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.709229    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.927517    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.928518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.928523    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.001724    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:01.208275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.419888    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.428910    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.429180    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.708863    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.920044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.923338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.926834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:21:01.985595    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:01.985631    8330 retry.go:31] will retry after 3.874793998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:02.207338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.419005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.423832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.425188    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:02.710318    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.919221    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.922831    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.925818    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.211916    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.421799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.423873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:03.425858    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.707940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.918761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.924771    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.925496    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.208373    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.427530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.427562    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:04.429185    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.711395    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.918946    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.922890    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.925419    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.207717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.425588    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.426139    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.428064    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:05.709966    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.861215    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:05.919835    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.925204    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.925220    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.512873    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.512876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.512941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:06.513032    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.712945    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.919940    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.927065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.928484    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.092306    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.231046214s)
	W0929 10:21:07.092346    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.092387    8330 retry.go:31] will retry after 5.851261749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:07.210508    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.421136    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.424149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.424367    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.709771    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.920164    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.925061    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.928279    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.220428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.419698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.423421    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.427645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:08.714820    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.919380    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.924174    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.926180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.210300    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.418857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.422339    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.423046    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.711312    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.920056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.925490    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.925515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.207095    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.425993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.426301    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.426888    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:10.708041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.921163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.923488    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.925261    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.211024    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.422876    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.426400    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.428603    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.709665    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.919412    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.925463    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.929002    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.209928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.420018    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.424532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.425138    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.710157    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.920343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.925416    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.926144    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.944295    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:13.208230    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.420309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.424729    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.425970    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.710892    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:13.844128    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.844162    8330 retry.go:31] will retry after 11.364763944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.918763    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.922860    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.923485    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.206401    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.418165    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.425970    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.426096    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.933764    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.937462    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.937474    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.937812    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.208057    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.418646    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.425269    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.425769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.993595    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.997320    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.997530    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.997548    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.206772    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.422583    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.424335    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.426227    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.708097    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.921247    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.923984    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.925900    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.210604    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.419727    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.428991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.429113    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.713728    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.929841    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.930573    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.933149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.208428    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.420222    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.424398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.424564    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.711774    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.918936    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.922240    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.923709    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.207800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.419045    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.422805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.422969    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.705451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.918694    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.923618    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.924430    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.207194    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.424041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.432156    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.434202    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.713518    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.921792    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.927184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.927815    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.207457    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.418704    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.422991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.425131    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.708372    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.924974    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.925102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.925333    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.208676    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.418579    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.422645    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.424686    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.709484    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.926015    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.927557    8330 kapi.go:107] duration metric: took 39.008871236s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:21:22.929226    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.209576    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.425205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.428082    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.714593    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.920363    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.924951    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.207552    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.420112    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.707639    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.922839    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.923981    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.209524    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:25.391829    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.419769    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.423811    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.709920    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.919838    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.922426    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.207779    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.300301    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.090742353s)
	W0929 10:21:26.300347    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:26.300372    8330 retry.go:31] will retry after 12.261050049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:26.418609    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.425516    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.709030    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.920490    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.923303    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.210832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.419571    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.423843    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.717343    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.920068    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.929499    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.213205    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.420745    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.425514    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.715069    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.919315    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.924075    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.209126    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.418285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.425171    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.722341    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.919736    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.924941    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.207130    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.421800    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.422894    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.712262    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.919477    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.922148    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.208448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.418793    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.422244    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.711448    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.921287    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.923795    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.209904    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.419914    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.422336    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.711037    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.920967    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.928515    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.207431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.419316    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.422381    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.709295    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.924149    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.928383    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.208000    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.428340    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.431876    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.709426    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.920188    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.924270    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.207181    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.418439    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.423100    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.707578    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.937088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.939327    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.208907    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.420989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.423616    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.708309    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.919632    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.924273    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.207435    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.419671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.423102    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.783791    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.919989    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.924314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.210022    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.420054    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.431837    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.562020    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:38.713780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.923654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.097166    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.208499    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.429072    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.429738    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.711870    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.726897    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.164840561s)
	W0929 10:21:39.726947    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.726967    8330 retry.go:31] will retry after 11.307676359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:39.923119    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.930020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.210041    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.420416    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.423961    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.709983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.918532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.921906    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.211550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.419901    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.421841    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.710969    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.918815    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.923114    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.210789    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.421257    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.423834    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.711332    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.919390    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.923203    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.209065    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.418434    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.425216    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.710063    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.917640    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.922545    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.205527    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.418369    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.422405    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.712591    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.925166    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.926743    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.214074    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.418599    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.422428    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.713883    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.920464    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.923397    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.207761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.424770    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.430331    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.708102    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.928807    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.930451    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.205481    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.418566    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.425398    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.713263    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.919750    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.923524    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.206758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.419899    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.421913    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.711173    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.923285    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.923314    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.208056    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.419528    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.423287    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.711515    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.924180    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.925537    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.212106    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.419682    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.423313    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.716590    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.919524    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.922669    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.034797    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:51.209977    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.418761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.424479    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.712918    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.923780    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.926533    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.208987    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.265550    8330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.230718165s)
	W0929 10:21:52.265592    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.265613    8330 retry.go:31] will retry after 29.631524393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:52.428241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.428344    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.749549    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.921742    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.928462    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.207817    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.419516    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.423773    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.711799    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.920857    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.925608    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.206121    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.419654    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.424065    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.715431    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.920151    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.925741    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.212980    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.419636    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.423024    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:55.713534    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.925668    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.934020    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.245122    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.419044    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.422805    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:56.708253    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.922688    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.922921    8330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:57.212695    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.430279    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:57.435265    8330 kapi.go:107] duration metric: took 1m13.516044822s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:21:57.708402    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.924317    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.210469    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.418928    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:58.712217    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.918879    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.210802    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.421325    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:59.707536    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.923138    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.208005    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.419250    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:00.708379    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.918693    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.206545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.418717    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:01.707897    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.924458    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.205991    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.419531    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:02.707091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.918959    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.207504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.419459    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:03.707093    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.919081    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.207001    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.418468    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:04.707785    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:04.918993    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.207795    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.418672    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:05.706790    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:05.920088    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.207438    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.418671    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:06.705954    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:06.919275    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.206855    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.418730    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:07.706264    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:07.918117    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.206783    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.426939    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:08.710678    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:08.918698    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.206327    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.418553    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:09.707129    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:09.918195    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.207272    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.418565    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:10.707124    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:10.919764    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.206241    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.418797    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:11.706944    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:11.919689    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.207328    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.418983    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:12.706788    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:12.919311    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.206761    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.419370    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:13.712805    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:13.919513    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.206504    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.418758    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:14.706621    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:14.918962    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.207334    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.419169    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:15.708290    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:15.918738    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.206832    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.419219    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:16.707913    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:16.919338    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.207062    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.418184    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:17.707167    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:17.918891    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.207006    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.418163    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:18.707075    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:18.919925    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.206556    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.418550    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:19.713091    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:19.920930    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.213277    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.421532    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:22:20.714653    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:20.919900    8330 kapi.go:107] duration metric: took 1m32.50483081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:22:20.922981    8330 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-911532 cluster.
	I0929 10:22:20.924653    8330 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:22:20.926061    8330 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 10:22:21.207013    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.714545    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:21.897772    8330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:22:22.206398    8330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:22:22.599960    8330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:22:22.600034    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600048    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600335    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600369    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 10:22:22.600380    8330 main.go:141] libmachine: Making call to close driver server
	I0929 10:22:22.600381    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600387    8330 main.go:141] libmachine: (addons-911532) Calling .Close
	I0929 10:22:22.600626    8330 main.go:141] libmachine: (addons-911532) DBG | Closing plugin on server side
	I0929 10:22:22.600645    8330 main.go:141] libmachine: Successfully made call to close driver server
	I0929 10:22:22.600652    8330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 10:22:22.600742    8330 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 10:22:22.710659    8330 kapi.go:107] duration metric: took 1m37.508081362s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:22:22.712652    8330 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, default-storageclass, cloud-spanner, storage-provisioner, nvidia-device-plugin, registry-creds, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:22:22.713925    8330 addons.go:514] duration metric: took 1m48.135056911s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns default-storageclass cloud-spanner storage-provisioner nvidia-device-plugin registry-creds storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 10:22:22.713972    8330 start.go:246] waiting for cluster config update ...
	I0929 10:22:22.713998    8330 start.go:255] writing updated cluster config ...
	I0929 10:22:22.714320    8330 ssh_runner.go:195] Run: rm -f paused
	I0929 10:22:22.723573    8330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:22.726685    8330 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.731909    8330 pod_ready.go:94] pod "coredns-66bc5c9577-2lxh5" is "Ready"
	I0929 10:22:22.731936    8330 pod_ready.go:86] duration metric: took 5.225628ms for pod "coredns-66bc5c9577-2lxh5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.733644    8330 pod_ready.go:83] waiting for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.738810    8330 pod_ready.go:94] pod "etcd-addons-911532" is "Ready"
	I0929 10:22:22.738834    8330 pod_ready.go:86] duration metric: took 5.173944ms for pod "etcd-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.741797    8330 pod_ready.go:83] waiting for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.754573    8330 pod_ready.go:94] pod "kube-apiserver-addons-911532" is "Ready"
	I0929 10:22:22.754598    8330 pod_ready.go:86] duration metric: took 12.780428ms for pod "kube-apiserver-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:22.758796    8330 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.128329    8330 pod_ready.go:94] pod "kube-controller-manager-addons-911532" is "Ready"
	I0929 10:22:23.128371    8330 pod_ready.go:86] duration metric: took 369.549352ms for pod "kube-controller-manager-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.328006    8330 pod_ready.go:83] waiting for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.728722    8330 pod_ready.go:94] pod "kube-proxy-zhcch" is "Ready"
	I0929 10:22:23.728750    8330 pod_ready.go:86] duration metric: took 400.712378ms for pod "kube-proxy-zhcch" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:23.928748    8330 pod_ready.go:83] waiting for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327749    8330 pod_ready.go:94] pod "kube-scheduler-addons-911532" is "Ready"
	I0929 10:22:24.327772    8330 pod_ready.go:86] duration metric: took 399.002764ms for pod "kube-scheduler-addons-911532" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:24.327782    8330 pod_ready.go:40] duration metric: took 1.604186731s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:24.369933    8330 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:22:24.371860    8330 out.go:179] * Done! kubectl is now configured to use "addons-911532" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.916719966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141663916693588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8699a920-1c01-4365-9579-bac12997ed94 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.917467020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=179035ad-77f6-461c-82c4-4458d79b6a9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.917649866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=179035ad-77f6-461c-82c4-4458d79b6a9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.918480965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b005c442863d28f3577638a73ce719b94fdd6297c41858388fb0df156658316,PodSandboxId:cc56ace72012aca97185b25865b2591244a49f36780949182f799c361868b188,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759141290662515733,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dg7kz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09b47213-eb81-4881-9a9f-900dd5a99739,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler:
{\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-
7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864
c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827e
dd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=179035ad-77f6-461c-82c4-4458d79b6a9d name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.967368517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f9a5843-dcf8-48a3-abf1-993539c87cc9 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.967442767Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f9a5843-dcf8-48a3-abf1-993539c87cc9 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.969478224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25d07a05-5f69-4728-b23d-7bd4a5b15f0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.972079078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141663972052354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25d07a05-5f69-4728-b23d-7bd4a5b15f0f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.972702015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7efb282f-0147-4b25-b68e-83257600ed6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.972789732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7efb282f-0147-4b25-b68e-83257600ed6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:43 addons-911532 crio[817]: time="2025-09-29 10:27:43.973402042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b005c442863d28f3577638a73ce719b94fdd6297c41858388fb0df156658316,PodSandboxId:cc56ace72012aca97185b25865b2591244a49f36780949182f799c361868b188,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759141290662515733,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dg7kz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09b47213-eb81-4881-9a9f-900dd5a99739,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler:
{\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-
7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864
c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827e
dd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7efb282f-0147-4b25-b68e-83257600ed6a name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.015445379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48a9a60f-1419-4016-b325-fd840cdd6404 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.015530917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48a9a60f-1419-4016-b325-fd840cdd6404 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.017915296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08fd0981-be9a-4ece-8240-8f9e4e11d0ec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.019822648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141664019475968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08fd0981-be9a-4ece-8240-8f9e4e11d0ec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.020414062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=686a5cd4-1db4-4968-a9bc-6f684ce02a09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.020472935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=686a5cd4-1db4-4968-a9bc-6f684ce02a09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.020974875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b005c442863d28f3577638a73ce719b94fdd6297c41858388fb0df156658316,PodSandboxId:cc56ace72012aca97185b25865b2591244a49f36780949182f799c361868b188,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759141290662515733,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dg7kz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09b47213-eb81-4881-9a9f-900dd5a99739,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler:
{\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-
7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864
c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827e
dd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=686a5cd4-1db4-4968-a9bc-6f684ce02a09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.069793781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30311321-1878-4f78-8bff-fc29fabc6a43 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.069868725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30311321-1878-4f78-8bff-fc29fabc6a43 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.071426672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=541e6081-8038-47f7-82f8-11cf4e5c8cc0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.072580602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759141664072556398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508783,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=541e6081-8038-47f7-82f8-11cf4e5c8cc0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.073396539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b60f0d1c-b58c-4c5b-b748-c08d45cc6c85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.073525869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b60f0d1c-b58c-4c5b-b748-c08d45cc6c85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:27:44 addons-911532 crio[817]: time="2025-09-29 10:27:44.074379063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd2da61f9111a8f172a910334b72c950aad9cf7fcf0d041300bde9676dc9c4b5,PodSandboxId:760f3f111a462fe45783435331c2e5be1da2a299dca6f398620a88efd67623a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759141346666364450,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50aa0ab4-8b35-4c2d-a178-4efae92e01df,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86299903225c275c16ba4ee1d779f033ab987579d4ac6422c19f2fd060e8a726,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759141341619672301,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d23b4a0ef79c2464e404d975c0d87785de3d7af5c843a051389a716ddc67865,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759141318540861603,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31c1763f6da5357250e1228bab85cc3d750958f66a3a5b7fd832b25bb0ff81c,PodSandboxId:03bb444700e14c181119a621393f5798c192136c811b6f3386b4b5152713ae09,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759141316748980793,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-vttt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2aad62c9-1c19-48f5-8b3c-05a46b75e030,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7dbc3a7ea7e456bf87d8426e18bc6eb1ad812d9efe8200d57fbf61c73a4d171e,PodSandboxId:6c52aed8c7fa63e3ca1db928ef45fc317c5c67533ca3212d1a21f5869230c6fb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef
218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141312626930590,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xljfq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c6e265ac-ca21-4ddc-9600-9f5c7a60fe39,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af76a866d9f7161cb48ae968bea8d7c06363958b0000b7c8b685193619ae39f8,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153ae
dc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759141309208153597,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ca93f1803439bb8d7c0ee31afbb42e13ee5031c7de1fabe02a09494bae80ad5,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759141308069570915,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9da4833f4415db4921306465d2fb4f126ca430c3d18c4a89eaa8f20e786ba8bb,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759141306414514921,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:988aa6a5e8a50931ef09ec1acd19e3ac911593b645c53bf4003da182b1674dae,PodSandboxId:4e8a339701c1f8aa4201a090399d4b949ead09ce62cee98adb8df3a0e096602a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image
:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304799810576,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-bx82z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010bb12-b7f9-43a6-85cc-4ea055c57a89,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b80e3a78fd38f1f51a6eefd8c4513909edb9b1053d3efaaee1ac3da4185108ae,PodSandboxId:a8bffbd0b48947ff0ac98962f5c658510cb0728c5e1fbf86c2847acb0688fbe6,Met
adata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759141304667906054,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-ldkqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56211c7-445f-47bc-979d-e6fb7ecca920,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1184f2460f2693ea6f8a8cec74a31ec4b065b23d8b9efdcaf7d9eaca4bf56b99,PodSand
boxId:26d005e1ee4992562de8fb92648009c0498759026fcf684e17b020f2022f85a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759141302712950200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8bg4m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 67e735e2-cc42-4d83-8149-dff4c064e226,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cd5b676567c158363a1ee8f2bc3d6f9fa
a37e1e1c5769465c497759421eb837,PodSandboxId:caa01a136f6dda1956d49589f54b72099827bda21e73efdfd4aac05099cf6980,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759141302581475900,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-zrj57,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f029db-1f0a-43b2-9640-cbdc71a7e26d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d31e488abc87f72cee5c4a8e47a04bc935ae66848e742542705ec4ec98f5a,PodSandboxId:580026dcf573a1a642de0bba5f6189c52a03840599ea1cd5c05bc56a2842f167,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759141301215828552,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e6c12-0662-47eb-8929-2e5ad0475f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524ce5f57761b95f68bef1a66bd35da700f6d7866c3217ac224b7711c93a6513,PodSandboxId:40500d85e8ee6bf1057285eeaa0ed2210f174216460e4d2049f944936f3d9504,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759141299428402112,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9fd31a0-37e1-4eec-a97f-a060c1a18bea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b005c442863d28f3577638a73ce719b94fdd6297c41858388fb0df156658316,PodSandboxId:cc56ace72012aca97185b25865b2591244a49f36780949182f799c361868b188,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759141290662515733,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dg7kz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09b47213-eb81-4881-9a9f-900dd5a99739,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65010026ccf4779ffbbf5a0d1b948ad224d2a7e064b4ef90af3448ede06a9ff,PodSandboxId:c415564a01e1fab92da8edae2e8824202bc486f37754027ab09d33eedd155c44,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759141286789793950,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-tp4c9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b33b4eee-87ed-427c-97fe-684dc1a39dc1,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler:
{\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb1fb889a566b019d028c434fcd1b749993ad201323e79f97aab274dfc347ce,PodSandboxId:6a9b5cb08e2bc5e57d63c8c6db0268901431aa3da3ac3e7f79e5bf4d64c54062,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759141279263661342,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a756c7b-
7c15-49df-8410-36c37bdf4785,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b6f4ec2f78e909b787cbcfadf86a5962d851f2159dd1536bc864bb4c146942a,PodSandboxId:a5ffe00771c3b3619e024c11d22b51c4f3587f4c5bde7d6222f7c2b905b30476,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759141244627788814,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jh557,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db58f7c-939d-4f8a-ad56-5e623bd97274,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991,PodSandboxId:38c60c0820a0d6aff995e82d2cefab3191781caeb135c427d83d8b51d8fd6bc8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759141243642108766,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03841ce7-2069-4447-8adf-81b1e5233916,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a,PodSandboxId:b478e3ec972282315c8ae9a1f15a19686b00bad35c1fddad651c6936db1c8618,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759141235709566509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-2lxh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4a50ee5-9d06-48e9-aeec-8e8fedfd92b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab,PodSandboxId:0d650e4b5f405a8659aec95c9a511629a431c4a60df6ab8393ac1713b86a6959,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759141235212839995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhcch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abca3b04-811d-4342-831f-4568c9eb2ee7,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50,PodSandboxId:04eeebd713634e07907eafd3a8303efc398fb4212e3caf61dddeace9c3777bf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864
c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759141222841471087,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edab1ff75c1cd7a0642fffd0b21cd736,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651,PodSandboxId:f208189bae6ea8042ad1470a0aa5d502dcf417de6417ddc74cbf1d8eb5ea4039,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759141222881788024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb644a85a1a2dd20a9929f14a1844358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636,PodSandboxId:2ab362827e
dd044925fd101b4d222362ad65e480d8d0f8a6f9691ad69dab263e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759141222851945611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f152e69a7a65e5947151db70e65d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e,PodSandboxId:4232352893b52fd8c9e6c7c3bbbab8d9a22c6dab5d90a4f5240097504f8391e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759141222827263618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-911532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf0001919057aab7c9bba4425845358c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b60f0d1c-b58c-4c5b-b748-c08d45cc6c85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	dd2da61f9111a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   760f3f111a462       busybox
	86299903225c2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          5 minutes ago       Running             csi-snapshotter                          0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	3d23b4a0ef79c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	f31c1763f6da5       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             5 minutes ago       Running             controller                               0                   03bb444700e14       ingress-nginx-controller-9cc49f96f-vttt9
	7dbc3a7ea7e45       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             5 minutes ago       Exited              patch                                    2                   6c52aed8c7fa6       ingress-nginx-admission-patch-xljfq
	af76a866d9f71       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            5 minutes ago       Running             liveness-probe                           0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	5ca93f1803439       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           5 minutes ago       Running             hostpath                                 0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	9da4833f4415d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                5 minutes ago       Running             node-driver-registrar                    0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	988aa6a5e8a50       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      5 minutes ago       Running             volume-snapshot-controller               0                   4e8a339701c1f       snapshot-controller-7d9fbc56b8-bx82z
	b80e3a78fd38f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      5 minutes ago       Running             volume-snapshot-controller               0                   a8bffbd0b4894       snapshot-controller-7d9fbc56b8-ldkqf
	1184f2460f269       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago       Exited              create                                   0                   26d005e1ee499       ingress-nginx-admission-create-8bg4m
	6cd5b676567c1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   caa01a136f6dd       csi-hostpathplugin-zrj57
	0f5d31e488abc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   580026dcf573a       csi-hostpath-resizer-0
	524ce5f57761b       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   40500d85e8ee6       csi-hostpath-attacher-0
	6b005c442863d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             6 minutes ago       Running             local-path-provisioner                   0                   cc56ace72012a       local-path-provisioner-648f6765c9-dg7kz
	d65010026ccf4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            6 minutes ago       Running             gadget                                   0                   c415564a01e1f       gadget-tp4c9
	efb1fb889a566       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               6 minutes ago       Running             minikube-ingress-dns                     0                   6a9b5cb08e2bc       kube-ingress-dns-minikube
	9b6f4ec2f78e9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     6 minutes ago       Running             amd-gpu-device-plugin                    0                   a5ffe00771c3b       amd-gpu-device-plugin-jh557
	8590713c2981f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   38c60c0820a0d       storage-provisioner
	b6c5c0be5e893       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago       Running             coredns                                  0                   b478e3ec97228       coredns-66bc5c9577-2lxh5
	175a117fb6f06       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             7 minutes ago       Running             kube-proxy                               0                   0d650e4b5f405       kube-proxy-zhcch
	3b6dbae6113ba       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             7 minutes ago       Running             etcd                                     0                   f208189bae6ea       etcd-addons-911532
	a7fd029454118       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             7 minutes ago       Running             kube-controller-manager                  0                   2ab362827edd0       kube-controller-manager-addons-911532
	e0a50327ef601       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             7 minutes ago       Running             kube-scheduler                           0                   04eeebd713634       kube-scheduler-addons-911532
	a00a42bfe3851       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             7 minutes ago       Running             kube-apiserver                           0                   4232352893b52       kube-apiserver-addons-911532
	
	
	==> coredns [b6c5c0be5e893e6cb715346a881e803fa92dd601e9a2829b7d1f07ac26f7787a] <==
	[INFO] 10.244.0.8:50652 - 16984 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000291243s
	[INFO] 10.244.0.8:50652 - 50804 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000151578s
	[INFO] 10.244.0.8:50652 - 20738 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000103041s
	[INFO] 10.244.0.8:50652 - 42178 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000141825s
	[INFO] 10.244.0.8:50652 - 37241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104758s
	[INFO] 10.244.0.8:50652 - 56970 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015054s
	[INFO] 10.244.0.8:50652 - 44050 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000117583s
	[INFO] 10.244.0.8:48716 - 14813 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130702s
	[INFO] 10.244.0.8:48716 - 15156 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000208908s
	[INFO] 10.244.0.8:37606 - 64555 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146123s
	[INFO] 10.244.0.8:37606 - 64844 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012694s
	[INFO] 10.244.0.8:46483 - 39882 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094662s
	[INFO] 10.244.0.8:46483 - 40157 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000344836s
	[INFO] 10.244.0.8:39149 - 27052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128832s
	[INFO] 10.244.0.8:39149 - 26844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220783s
	[INFO] 10.244.0.23:43438 - 39803 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000622841s
	[INFO] 10.244.0.23:47210 - 22362 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000808655s
	[INFO] 10.244.0.23:54815 - 54620 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102275s
	[INFO] 10.244.0.23:48706 - 23486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000290579s
	[INFO] 10.244.0.23:35174 - 37530 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095187s
	[INFO] 10.244.0.23:58302 - 160 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148316s
	[INFO] 10.244.0.23:60222 - 18112 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001543386s
	[INFO] 10.244.0.23:42303 - 24400 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005221068s
	[INFO] 10.244.0.27:57662 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000379174s
	[INFO] 10.244.0.27:52524 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831634s
	
	
	==> describe nodes <==
	Name:               addons-911532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-911532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=addons-911532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_20_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-911532
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-911532"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:20:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-911532
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:27:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:24:02 +0000   Mon, 29 Sep 2025 10:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    addons-911532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c8a2bbd76874c1a8020738f402773b8
	  System UUID:                0c8a2bbd-7687-4c1a-8020-738f402773b8
	  Boot ID:                    9d51dc84-868d-42de-9a46-75702ae9a571
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  gadget                      gadget-tp4c9                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-vttt9                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m1s
	  kube-system                 amd-gpu-device-plugin-jh557                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 coredns-66bc5c9577-2lxh5                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m10s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 csi-hostpathplugin-zrj57                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 etcd-addons-911532                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m16s
	  kube-system                 kube-apiserver-addons-911532                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-controller-manager-addons-911532                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-proxy-zhcch                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-addons-911532                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-bx82z                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-ldkqf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  local-path-storage          helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  local-path-storage          local-path-provisioner-648f6765c9-dg7kz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  Starting                 7m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m23s (x8 over 7m23s)  kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s (x8 over 7m23s)  kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s (x7 over 7m23s)  kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m16s                  kubelet          Node addons-911532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m16s                  kubelet          Node addons-911532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m16s                  kubelet          Node addons-911532 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m15s                  kubelet          Node addons-911532 status is now: NodeReady
	  Normal  RegisteredNode           7m12s                  node-controller  Node addons-911532 event: Registered Node addons-911532 in Controller
	
	
	==> dmesg <==
	[  +0.859526] kauditd_printk_skb: 455 callbacks suppressed
	[Sep29 10:21] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.855059] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.424741] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.533609] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.677200] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.756729] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.380361] kauditd_printk_skb: 115 callbacks suppressed
	[  +4.682193] kauditd_printk_skb: 120 callbacks suppressed
	[  +4.066585] kauditd_printk_skb: 83 callbacks suppressed
	[Sep29 10:22] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.687590] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.038379] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.163052] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.578786] kauditd_printk_skb: 46 callbacks suppressed
	[Sep29 10:23] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000124] kauditd_printk_skb: 22 callbacks suppressed
	[ +30.032876] kauditd_printk_skb: 26 callbacks suppressed
	[  +2.772049] kauditd_printk_skb: 107 callbacks suppressed
	[Sep29 10:24] kauditd_printk_skb: 54 callbacks suppressed
	[ +50.465336] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 9 callbacks suppressed
	[Sep29 10:25] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3b6dbae6113baa53e9504ec93e91af4dc56681d82f26ff33230ebb0ec68e7651] <==
	{"level":"warn","ts":"2025-09-29T10:21:33.629920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.200713ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:33.629938Z","caller":"traceutil/trace.go:172","msg":"trace[582963862] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1029; }","duration":"153.249831ms","start":"2025-09-29T10:21:33.476683Z","end":"2025-09-29T10:21:33.629933Z","steps":["trace[582963862] 'agreement among raft nodes before linearized reading'  (duration: 153.191902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:33.632640Z","caller":"traceutil/trace.go:172","msg":"trace[5721142] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"196.15975ms","start":"2025-09-29T10:21:33.435470Z","end":"2025-09-29T10:21:33.631629Z","steps":["trace[5721142] 'process raft request'  (duration: 194.644961ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:39.088425Z","caller":"traceutil/trace.go:172","msg":"trace[1920545131] linearizableReadLoop","detail":"{readStateIndex:1075; appliedIndex:1075; }","duration":"165.718933ms","start":"2025-09-29T10:21:38.922692Z","end":"2025-09-29T10:21:39.088411Z","steps":["trace[1920545131] 'read index received'  (duration: 165.713078ms)","trace[1920545131] 'applied index is now lower than readState.Index'  (duration: 5.095µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:39.088595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.848818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:39.088625Z","caller":"traceutil/trace.go:172","msg":"trace[1181994063] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"165.92758ms","start":"2025-09-29T10:21:38.922688Z","end":"2025-09-29T10:21:39.088616Z","steps":["trace[1181994063] 'agreement among raft nodes before linearized reading'  (duration: 165.822615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:21:39.089113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.606269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq\" limit:1 ","response":"range_response_count:1 size:4722"}
	{"level":"info","ts":"2025-09-29T10:21:39.089162Z","caller":"traceutil/trace.go:172","msg":"trace[517473847] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xljfq; range_end:; response_count:1; response_revision:1048; }","duration":"164.659529ms","start":"2025-09-29T10:21:38.924494Z","end":"2025-09-29T10:21:39.089153Z","steps":["trace[517473847] 'agreement among raft nodes before linearized reading'  (duration: 164.533832ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:39.089291Z","caller":"traceutil/trace.go:172","msg":"trace[103671638] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"167.91547ms","start":"2025-09-29T10:21:38.921368Z","end":"2025-09-29T10:21:39.089284Z","steps":["trace[103671638] 'process raft request'  (duration: 167.512399ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:41.128032Z","caller":"traceutil/trace.go:172","msg":"trace[1380742237] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"160.629944ms","start":"2025-09-29T10:21:40.967387Z","end":"2025-09-29T10:21:41.128017Z","steps":["trace[1380742237] 'process raft request'  (duration: 160.428456ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:52.740363Z","caller":"traceutil/trace.go:172","msg":"trace[100017207] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"122.049264ms","start":"2025-09-29T10:21:52.618297Z","end":"2025-09-29T10:21:52.740347Z","steps":["trace[100017207] 'process raft request'  (duration: 121.808982ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.234316Z","caller":"traceutil/trace.go:172","msg":"trace[1596468790] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1190; }","duration":"200.26342ms","start":"2025-09-29T10:21:56.034037Z","end":"2025-09-29T10:21:56.234300Z","steps":["trace[1596468790] 'read index received'  (duration: 200.256637ms)","trace[1596468790] 'applied index is now lower than readState.Index'  (duration: 6.184µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:21:56.234915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.854605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:21:56.235162Z","caller":"traceutil/trace.go:172","msg":"trace[794219373] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1159; }","duration":"201.11834ms","start":"2025-09-29T10:21:56.034033Z","end":"2025-09-29T10:21:56.235151Z","steps":["trace[794219373] 'agreement among raft nodes before linearized reading'  (duration: 200.701253ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:21:56.235298Z","caller":"traceutil/trace.go:172","msg":"trace[1282453769] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"273.44806ms","start":"2025-09-29T10:21:55.961839Z","end":"2025-09-29T10:21:56.235287Z","steps":["trace[1282453769] 'process raft request'  (duration: 272.570369ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:49.922596Z","caller":"traceutil/trace.go:172","msg":"trace[1297543237] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"107.889005ms","start":"2025-09-29T10:23:49.814676Z","end":"2025-09-29T10:23:49.922565Z","steps":["trace[1297543237] 'process raft request'  (duration: 107.763843ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906428Z","caller":"traceutil/trace.go:172","msg":"trace[852559153] linearizableReadLoop","detail":"{readStateIndex:1673; appliedIndex:1673; }","duration":"207.27017ms","start":"2025-09-29T10:23:56.699140Z","end":"2025-09-29T10:23:56.906410Z","steps":["trace[852559153] 'read index received'  (duration: 207.264352ms)","trace[852559153] 'applied index is now lower than readState.Index'  (duration: 4.799µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T10:23:56.906582Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.425338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:23:56.906604Z","caller":"traceutil/trace.go:172","msg":"trace[159869457] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1610; }","duration":"207.488053ms","start":"2025-09-29T10:23:56.699111Z","end":"2025-09-29T10:23:56.906599Z","steps":["trace[159869457] 'agreement among raft nodes before linearized reading'  (duration: 207.399273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.171419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0\" limit:1 ","response":"range_response_count:1 size:4572"}
	{"level":"info","ts":"2025-09-29T10:23:56.906788Z","caller":"traceutil/trace.go:172","msg":"trace[1828175108] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0; range_end:; response_count:1; response_revision:1611; }","duration":"168.215755ms","start":"2025-09-29T10:23:56.738542Z","end":"2025-09-29T10:23:56.906758Z","steps":["trace[1828175108] 'agreement among raft nodes before linearized reading'  (duration: 168.108786ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:23:56.906872Z","caller":"traceutil/trace.go:172","msg":"trace[928903816] transaction","detail":"{read_only:false; response_revision:1611; number_of_response:1; }","duration":"363.567544ms","start":"2025-09-29T10:23:56.543297Z","end":"2025-09-29T10:23:56.906865Z","steps":["trace[928903816] 'process raft request'  (duration: 363.245361ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:23:56.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.243902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-29T10:23:56.906980Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T10:23:56.543275Z","time spent":"363.614208ms","remote":"127.0.0.1:49608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1603 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-29T10:23:56.906992Z","caller":"traceutil/trace.go:172","msg":"trace[679122027] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1611; }","duration":"126.265845ms","start":"2025-09-29T10:23:56.780721Z","end":"2025-09-29T10:23:56.906987Z","steps":["trace[679122027] 'agreement among raft nodes before linearized reading'  (duration: 126.228069ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:27:44 up 7 min,  0 users,  load average: 0.49, 0.75, 0.52
	Linux addons-911532 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a00a42bfe385199d067828289bf42f54827d8c441368629a7bc1f630b335746e] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0929 10:21:30.799774       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.804776       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.826075       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.867726       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	E0929 10:21:30.949357       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.37.37:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.37.37:443: connect: connection refused" logger="UnhandledError"
	I0929 10:21:31.164100       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 10:21:36.538229       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:21:38.492287       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:22:34.164479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.179:8443->192.168.39.1:36626: use of closed network connection
	E0929 10:22:34.340442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.179:8443->192.168.39.1:36648: use of closed network connection
	I0929 10:22:43.843704       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.159.127"}
	I0929 10:22:47.142596       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:22:56.238986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:23:31.826473       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:24:03.043737       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:24:03.224458       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.145.250"}
	I0929 10:24:09.259241       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:24:18.836921       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:25:29.628240       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:25:45.763121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:37.011596       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:26:47.943967       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a7fd02945411862cbbf762bab42e24df4c87a418df8b35995e7dd8be37796636] <==
	I0929 10:20:33.008605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:20:33.008863       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:20:33.010334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:20:33.010394       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 10:20:33.010459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 10:20:33.011207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:20:33.011280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:20:33.011301       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:20:33.011357       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:20:33.013406       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:20:33.013533       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:20:33.014606       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 10:20:33.021630       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	E0929 10:20:41.380381       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0929 10:21:02.979436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 10:21:02.979855       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 10:21:02.979907       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 10:21:03.032661       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 10:21:03.042077       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 10:21:03.181450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:21:03.242834       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:21:30.779413       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/metrics-server" err="EndpointSlice informer cache is out of date"
	I0929 10:22:47.640151       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0929 10:24:07.669590       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0929 10:24:11.181949       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [175a117fb6f06a3a250e33b7087fba88b740cfdf629e237f60ae0464b9de4eab] <==
	I0929 10:20:35.986576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:20:36.189499       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:20:36.189548       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.179"]
	E0929 10:20:36.189623       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:20:36.301867       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:20:36.301934       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:20:36.301961       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:20:36.326623       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:20:36.327146       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:20:36.327246       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:20:36.336796       1 config.go:200] "Starting service config controller"
	I0929 10:20:36.336830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:20:36.336848       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:20:36.336851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:20:36.336861       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:20:36.336866       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:20:36.342731       1 config.go:309] "Starting node config controller"
	I0929 10:20:36.342767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:20:36.342774       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:20:36.437304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:20:36.437613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:20:36.437632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e0a50327ef6012889c1d102209d8e88d4379ab8db2ce573d6b836416420edd50] <==
	E0929 10:20:26.063633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:26.064483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:26.064623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:20:26.064815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:26.065104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:26.069817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:26.071395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:20:26.072119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.073653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:26.073850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.074029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:26.883755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:20:26.932747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:26.936951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:26.973390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:20:26.982912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:27.004100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:27.067449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:20:27.073035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:27.168604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:20:27.203313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:27.256704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:27.286622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:27.625245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 10:20:29.547277       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:26:39 addons-911532 kubelet[1498]: E0929 10:26:39.115603    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141599115145036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:26:39 addons-911532 kubelet[1498]: E0929 10:26:39.115761    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141599115145036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:26:40 addons-911532 kubelet[1498]: E0929 10:26:40.026805    1498 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:26:40 addons-911532 kubelet[1498]: E0929 10:26:40.026879    1498 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:26:40 addons-911532 kubelet[1498]: E0929 10:26:40.027074    1498 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0_local-path-storage(578a5a7c-d138-4bb8-a5f0-099878d77d28): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:26:40 addons-911532 kubelet[1498]: E0929 10:26:40.027121    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0" podUID="578a5a7c-d138-4bb8-a5f0-099878d77d28"
	Sep 29 10:26:40 addons-911532 kubelet[1498]: E0929 10:26:40.720298    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0" podUID="578a5a7c-d138-4bb8-a5f0-099878d77d28"
	Sep 29 10:26:49 addons-911532 kubelet[1498]: E0929 10:26:49.122269    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141609120277029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:26:49 addons-911532 kubelet[1498]: E0929 10:26:49.122371    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141609120277029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:26:57 addons-911532 kubelet[1498]: I0929 10:26:57.600012    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jh557" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:26:59 addons-911532 kubelet[1498]: E0929 10:26:59.125327    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141619124978077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:26:59 addons-911532 kubelet[1498]: E0929 10:26:59.125421    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141619124978077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:09 addons-911532 kubelet[1498]: E0929 10:27:09.127898    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141629127427665  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:09 addons-911532 kubelet[1498]: E0929 10:27:09.127922    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141629127427665  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:19 addons-911532 kubelet[1498]: E0929 10:27:19.131699    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141639131132348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:19 addons-911532 kubelet[1498]: E0929 10:27:19.132062    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141639131132348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:29 addons-911532 kubelet[1498]: E0929 10:27:29.134484    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141649134222565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:29 addons-911532 kubelet[1498]: E0929 10:27:29.134528    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141649134222565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:39 addons-911532 kubelet[1498]: E0929 10:27:39.137773    1498 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141659137339470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:39 addons-911532 kubelet[1498]: E0929 10:27:39.138079    1498 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141659137339470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:508783}  inodes_used:{value:181}}"
	Sep 29 10:27:39 addons-911532 kubelet[1498]: I0929 10:27:39.599397    1498 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:27:41 addons-911532 kubelet[1498]: E0929 10:27:41.148233    1498 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:27:41 addons-911532 kubelet[1498]: E0929 10:27:41.148302    1498 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:27:41 addons-911532 kubelet[1498]: E0929 10:27:41.148508    1498 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(c16b0297-3ef5-4961-9f5e-0019acc5ea5f): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:27:41 addons-911532 kubelet[1498]: E0929 10:27:41.148545    1498 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c16b0297-3ef5-4961-9f5e-0019acc5ea5f"
	
	
	==> storage-provisioner [8590713c2981f7e21a94ebe7a67b99f6cd9fe7a5b1d1e09f228f4b011567a991] <==
	W0929 10:27:20.034321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:22.037952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:22.047276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:24.050603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:24.057696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:26.061233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:26.067241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:28.071370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:28.078071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:30.084963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:30.089960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:32.094052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:32.103136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:34.107572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:34.114577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:36.118831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:36.124159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:38.127755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:38.132694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:40.137438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:40.145103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:42.148944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:42.154408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:44.160348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:27:44.172368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
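
The storage-provisioner warnings in the log above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") are harmless for this run, but they show a client that still reads the core/v1 Endpoints API. For reference, a minimal client-go sketch of reading the replacement EndpointSlice API instead; the kubeconfig path, namespace, and service name below are illustrative assumptions, not values taken from the test:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// EndpointSlices are linked to their Service by the well-known
	// kubernetes.io/service-name label rather than by object name.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}
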
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-911532 -n addons-911532
helpers_test.go:269: (dbg) Run:  kubectl --context addons-911532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0: exit status 1 (87.834318ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:24:03 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4bxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j4bxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m42s                 default-scheduler  Successfully assigned default/nginx to addons-911532
	  Normal   BackOff    2m6s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m6s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    112s (x2 over 3m42s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x2 over 2m7s)     kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4s (x2 over 2m7s)     kubelet            Error: ErrImagePull
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-911532/192.168.39.179
	Start Time:       Mon, 29 Sep 2025 10:23:20 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8z2x6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-8z2x6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m25s                default-scheduler  Successfully assigned default/task-pv-pod to addons-911532
	  Warning  Failed     96s (x2 over 3m22s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x2 over 3m22s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    84s (x2 over 3m22s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     84s (x2 over 3m22s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    72s (x3 over 4m24s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6jzv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g6jzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8bg4m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xljfq" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-911532 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-8bg4m ingress-nginx-admission-patch-xljfq helper-pod-create-pvc-937c6346-84b7-4f57-ba02-2f7990d0e2d0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m3.649284739s)
--- FAIL: TestAddons/parallel/LocalPath (366.32s)
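
Note: every ErrImagePull in this failure is Docker Hub's anonymous pull rate limit ("toomanyrequests"), not a defect in the addon under test. One common mitigation is an authenticated pull secret that pods reference through imagePullSecrets. A minimal client-go sketch follows; the kubeconfig path, namespace, secret name, and credentials are placeholders and not part of the test harness:

package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build a .dockerconfigjson payload with placeholder Docker Hub
	// credentials; authenticated pulls get a higher rate limit.
	auth := base64.StdEncoding.EncodeToString([]byte("hub-user:hub-access-token"))
	dockerCfg, err := json.Marshal(map[string]any{
		"auths": map[string]any{
			"https://index.docker.io/v1/": map[string]string{"auth": auth},
		},
	})
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-creds", Namespace: "default"},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: dockerCfg},
	}
	if _, err := client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created secret default/dockerhub-creds; reference it from pod.spec.imagePullSecrets")
}

Pods such as the nginx and task-pv-pod described above would then pull docker.io images under the authenticated limit instead of the anonymous one, provided they list the secret under imagePullSecrets.
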

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960153 --alsologtostderr -v=1]
E0929 10:38:06.019803    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:38:46.981976    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:40:08.904075    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:42:25.042815    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:42:52.745474    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960153 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960153 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-960153 --alsologtostderr -v=1] stderr:
I0929 10:37:54.267978   20881 out.go:360] Setting OutFile to fd 1 ...
I0929 10:37:54.268266   20881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:37:54.268274   20881 out.go:374] Setting ErrFile to fd 2...
I0929 10:37:54.268278   20881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:37:54.268455   20881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:37:54.268716   20881 mustload.go:65] Loading cluster: functional-960153
I0929 10:37:54.269062   20881 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:37:54.269503   20881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:37:54.269561   20881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:37:54.283513   20881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
I0929 10:37:54.283981   20881 main.go:141] libmachine: () Calling .GetVersion
I0929 10:37:54.284513   20881 main.go:141] libmachine: Using API Version  1
I0929 10:37:54.284535   20881 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:37:54.284957   20881 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:37:54.285155   20881 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:37:54.286872   20881 host.go:66] Checking if "functional-960153" exists ...
I0929 10:37:54.287140   20881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:37:54.287172   20881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:37:54.302619   20881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
I0929 10:37:54.303034   20881 main.go:141] libmachine: () Calling .GetVersion
I0929 10:37:54.303391   20881 main.go:141] libmachine: Using API Version  1
I0929 10:37:54.303410   20881 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:37:54.303720   20881 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:37:54.303896   20881 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:37:54.304079   20881 api_server.go:166] Checking apiserver status ...
I0929 10:37:54.304119   20881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 10:37:54.304149   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:37:54.306768   20881 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:37:54.307261   20881 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:37:54.307286   20881 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:37:54.307465   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:37:54.307659   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:37:54.307837   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:37:54.307980   20881 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:37:54.402689   20881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6106/cgroup
W0929 10:37:54.415554   20881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6106/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0929 10:37:54.415631   20881 ssh_runner.go:195] Run: ls
I0929 10:37:54.421091   20881 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8441/healthz ...
I0929 10:37:54.426735   20881 api_server.go:279] https://192.168.39.210:8441/healthz returned 200:
ok
W0929 10:37:54.426783   20881 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 10:37:54.426950   20881 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:37:54.426969   20881 addons.go:69] Setting dashboard=true in profile "functional-960153"
I0929 10:37:54.426975   20881 addons.go:238] Setting addon dashboard=true in "functional-960153"
I0929 10:37:54.427000   20881 host.go:66] Checking if "functional-960153" exists ...
I0929 10:37:54.427432   20881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:37:54.427480   20881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:37:54.440650   20881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
I0929 10:37:54.441070   20881 main.go:141] libmachine: () Calling .GetVersion
I0929 10:37:54.441495   20881 main.go:141] libmachine: Using API Version  1
I0929 10:37:54.441516   20881 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:37:54.441822   20881 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:37:54.442248   20881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:37:54.442289   20881 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:37:54.454724   20881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
I0929 10:37:54.455121   20881 main.go:141] libmachine: () Calling .GetVersion
I0929 10:37:54.455466   20881 main.go:141] libmachine: Using API Version  1
I0929 10:37:54.455484   20881 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:37:54.455831   20881 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:37:54.456015   20881 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:37:54.457587   20881 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:37:54.459849   20881 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 10:37:54.461031   20881 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 10:37:54.462123   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 10:37:54.462142   20881 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 10:37:54.462157   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:37:54.464753   20881 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:37:54.465104   20881 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:37:54.465135   20881 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:37:54.465230   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:37:54.465427   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:37:54.465578   20881 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:37:54.465706   20881 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:37:54.561485   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 10:37:54.561512   20881 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 10:37:54.582271   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 10:37:54.582296   20881 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 10:37:54.604073   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 10:37:54.604097   20881 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 10:37:54.629723   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 10:37:54.629743   20881 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 10:37:54.651734   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 10:37:54.651758   20881 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 10:37:54.673195   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 10:37:54.673222   20881 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 10:37:54.696250   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 10:37:54.696278   20881 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 10:37:54.717730   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 10:37:54.717768   20881 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 10:37:54.738114   20881 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 10:37:54.738140   20881 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 10:37:54.759488   20881 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 10:37:55.462076   20881 main.go:141] libmachine: Making call to close driver server
I0929 10:37:55.462179   20881 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:37:55.462494   20881 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:37:55.462509   20881 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:37:55.462517   20881 main.go:141] libmachine: Making call to close driver server
I0929 10:37:55.462524   20881 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:37:55.462762   20881 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:37:55.462809   20881 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:37:55.462830   20881 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:37:55.464292   20881 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-960153 addons enable metrics-server

                                                
                                                
I0929 10:37:55.465564   20881 addons.go:201] Writing out "functional-960153" config to set dashboard=true...
W0929 10:37:55.465814   20881 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 10:37:55.466471   20881 kapi.go:59] client config for functional-960153: &rest.Config{Host:"https://192.168.39.210:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt", KeyFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.key", CAFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 10:37:55.466909   20881 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 10:37:55.466923   20881 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 10:37:55.466928   20881 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 10:37:55.466932   20881 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 10:37:55.466935   20881 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 10:37:55.475655   20881 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  876540e2-620e-4110-be31-e80538accbea 826 0 2025-09-29 10:37:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 10:37:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.170.200,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.170.200],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 10:37:55.475779   20881 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 10:37:55.475831   20881 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-960153 proxy --port 36195]
I0929 10:37:55.476118   20881 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 10:37:55.520450   20881 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 10:37:55.520491   20881 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 10:37:55.531028   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90efc0cc-1ae4-43a2-85ba-5a2fdef247cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000099400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8640 TLS:<nil>}
I0929 10:37:55.531121   20881 retry.go:31] will retry after 77.495µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.534820   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34ce8053-558d-4f6f-86cb-7823d8f1c9ba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc0008060c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072f680 TLS:<nil>}
I0929 10:37:55.534873   20881 retry.go:31] will retry after 210.843µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.538204   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5e691e9-1965-493b-82f2-c430a6e4cc58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc0007929c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8780 TLS:<nil>}
I0929 10:37:55.538253   20881 retry.go:31] will retry after 191.134µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.541291   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c9cb01e-788f-4747-9a30-8aff8b553e83] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000806340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002928c0 TLS:<nil>}
I0929 10:37:55.541385   20881 retry.go:31] will retry after 362.024µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.544447   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c9045799-aa9a-4daf-afe2-61071fa495f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000792b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e88c0 TLS:<nil>}
I0929 10:37:55.544491   20881 retry.go:31] will retry after 497.263µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.553723   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2564d96e-912a-41df-adeb-720d65a327d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc0008064c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I0929 10:37:55.553781   20881 retry.go:31] will retry after 685.961µs: Temporary Error: unexpected response code: 503
I0929 10:37:55.557303   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3a57aaeb-548d-4a56-881d-a7229eddf92e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000099700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8a00 TLS:<nil>}
I0929 10:37:55.557338   20881 retry.go:31] will retry after 1.678701ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.561828   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cad22434-5534-4749-b3c1-9a818668f3d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000792c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072f7c0 TLS:<nil>}
I0929 10:37:55.561857   20881 retry.go:31] will retry after 1.981999ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.565908   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f6617e5-74a4-43ac-a3ff-1b868a058238] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000806640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292c80 TLS:<nil>}
I0929 10:37:55.565944   20881 retry.go:31] will retry after 2.704964ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.571201   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[286e199b-ddb7-4d54-bb43-616bd461e1cb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000806740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8b40 TLS:<nil>}
I0929 10:37:55.571249   20881 retry.go:31] will retry after 3.344895ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.577805   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e02df7e-3f9e-434b-9d44-6f9931a14b1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000099840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8dc0 TLS:<nil>}
I0929 10:37:55.577844   20881 retry.go:31] will retry after 8.016644ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.587984   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db850f1d-e985-475a-aaa2-b133dd50939d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000792d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072f900 TLS:<nil>}
I0929 10:37:55.588014   20881 retry.go:31] will retry after 5.395581ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.596443   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2b4634e-8df2-4fb6-b062-3f5e7aa3bfbf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000806900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292dc0 TLS:<nil>}
I0929 10:37:55.596482   20881 retry.go:31] will retry after 11.106498ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.609991   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b3e8e1fb-1d59-4be0-943a-116b21be96a8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000792ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8f00 TLS:<nil>}
I0929 10:37:55.610028   20881 retry.go:31] will retry after 23.697912ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.637895   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[735b26c3-4eff-4bd3-a698-d3a18afb0547] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc0000999c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I0929 10:37:55.637955   20881 retry.go:31] will retry after 21.593447ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.666779   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8feaf9b-988d-40a8-8e7a-9e8a101a6634] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000793000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072fa40 TLS:<nil>}
I0929 10:37:55.666837   20881 retry.go:31] will retry after 34.308611ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.708056   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94883c6d-70f3-4bd2-a173-33b0d337458c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000099c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I0929 10:37:55.708139   20881 retry.go:31] will retry after 47.476678ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.761070   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b816a5dd-4abd-49fe-b0fa-b73758741c2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000793100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072fb80 TLS:<nil>}
I0929 10:37:55.761158   20881 retry.go:31] will retry after 57.656755ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.824516   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[686d74ce-4b8e-415b-bc51-608947264acf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000806a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I0929 10:37:55.824584   20881 retry.go:31] will retry after 117.347455ms: Temporary Error: unexpected response code: 503
I0929 10:37:55.945926   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ea0773f-79cd-472d-af25-3e25ca7aa0ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:55 GMT]] Body:0xc000793200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9040 TLS:<nil>}
I0929 10:37:55.946006   20881 retry.go:31] will retry after 317.672962ms: Temporary Error: unexpected response code: 503
I0929 10:37:56.267484   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b195c075-4a94-42df-87ff-7c1de625f25a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:56 GMT]] Body:0xc000806b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I0929 10:37:56.267546   20881 retry.go:31] will retry after 275.016854ms: Temporary Error: unexpected response code: 503
I0929 10:37:56.546668   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd3e6ed5-819e-4b45-aa00-d89f9c03fb03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:56 GMT]] Body:0xc0017020c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9180 TLS:<nil>}
I0929 10:37:56.546751   20881 retry.go:31] will retry after 686.736815ms: Temporary Error: unexpected response code: 503
I0929 10:37:57.236829   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f81812c-63fb-4655-84b7-d068745cc70c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:57 GMT]] Body:0xc000806c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072fcc0 TLS:<nil>}
I0929 10:37:57.236894   20881 retry.go:31] will retry after 604.120194ms: Temporary Error: unexpected response code: 503
I0929 10:37:57.845097   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c0b8d38-2f51-4b23-ac09-708f6b28c2ab] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:57 GMT]] Body:0xc000806d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e92c0 TLS:<nil>}
I0929 10:37:57.845172   20881 retry.go:31] will retry after 872.989935ms: Temporary Error: unexpected response code: 503
I0929 10:37:58.722559   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f0dde535-2931-4e01-a760-5447dd1d6a4b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:37:58 GMT]] Body:0xc000806ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015d2000 TLS:<nil>}
I0929 10:37:58.722615   20881 retry.go:31] will retry after 2.358442776s: Temporary Error: unexpected response code: 503
I0929 10:38:01.085560   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbea530a-26ba-4b6f-8e36-2b972e8fe8e4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:01 GMT]] Body:0xc0017021c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001572000 TLS:<nil>}
I0929 10:38:01.085630   20881 retry.go:31] will retry after 2.43682441s: Temporary Error: unexpected response code: 503
I0929 10:38:03.528740   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b32a30ce-d4a6-4830-a3af-bd6f0e210841] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:03 GMT]] Body:0xc00072a2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00072fe00 TLS:<nil>}
I0929 10:38:03.528847   20881 retry.go:31] will retry after 4.563972786s: Temporary Error: unexpected response code: 503
I0929 10:38:08.097820   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6e876b77-ac06-4ef6-923e-9e63d303d4e6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:08 GMT]] Body:0xc000807000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001572140 TLS:<nil>}
I0929 10:38:08.097882   20881 retry.go:31] will retry after 8.243104765s: Temporary Error: unexpected response code: 503
I0929 10:38:16.344897   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c99d487e-053f-4d3a-816c-3b0991c15428] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:16 GMT]] Body:0xc001702280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001572280 TLS:<nil>}
I0929 10:38:16.344991   20881 retry.go:31] will retry after 7.000401213s: Temporary Error: unexpected response code: 503
I0929 10:38:23.352477   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47e29a5e-a289-4cd8-a9ee-d76e3b326d4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:23 GMT]] Body:0xc000807080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171c000 TLS:<nil>}
I0929 10:38:23.352544   20881 retry.go:31] will retry after 17.435505169s: Temporary Error: unexpected response code: 503
I0929 10:38:40.792578   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a987b660-dddd-43db-a1b2-7c6b9f67ced7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:38:40 GMT]] Body:0xc001702340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015d2140 TLS:<nil>}
I0929 10:38:40.792647   20881 retry.go:31] will retry after 26.818903969s: Temporary Error: unexpected response code: 503
I0929 10:39:07.615979   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[011dd59f-dc99-4d7a-8c0f-30eee4caa51a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:39:07 GMT]] Body:0xc001702400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00171c140 TLS:<nil>}
I0929 10:39:07.616055   20881 retry.go:31] will retry after 33.193377987s: Temporary Error: unexpected response code: 503
I0929 10:39:40.813539   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5d754e4-950a-4669-91c2-9e8cfc596903] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:39:40 GMT]] Body:0xc0008071c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015723c0 TLS:<nil>}
I0929 10:39:40.813621   20881 retry.go:31] will retry after 28.52117187s: Temporary Error: unexpected response code: 503
I0929 10:40:09.342326   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba3cc806-6e3f-4c3d-8a3a-54f06a9778ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:40:09 GMT]] Body:0xc0017020c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015d2280 TLS:<nil>}
I0929 10:40:09.342431   20881 retry.go:31] will retry after 1m25.720568423s: Temporary Error: unexpected response code: 503
I0929 10:41:35.068079   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[779aa88c-87b2-4c29-bc69-13ac45a29e39] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:41:35 GMT]] Body:0xc0017021c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ca3c0 TLS:<nil>}
I0929 10:41:35.068139   20881 retry.go:31] will retry after 46.370095811s: Temporary Error: unexpected response code: 503
I0929 10:42:21.443410   20881 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e4d38621-c4b9-473e-8015-56568fbd03e8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:42:21 GMT]] Body:0xc000806080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015d23c0 TLS:<nil>}
I0929 10:42:21.443478   20881 retry.go:31] will retry after 1m22.113293772s: Temporary Error: unexpected response code: 503
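The block above is the dashboard health check looping on HTTP 503 with roughly exponentially growing delays (117ms, 317ms, ... up to over a minute) until the surrounding test deadline fires. A minimal, self-contained Go sketch of that polling pattern, not minikube's actual retry.go, might look like the following; the 5-minute budget and the starting delay are assumptions for illustration only:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitFor200 polls url until it answers 200 OK or maxWait elapses,
// doubling the delay between attempts (a simple exponential backoff).
func waitFor200(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("unexpected response code: %d, will retry after %v\n", resp.StatusCode, delay)
		}
		time.Sleep(delay)
		delay *= 2 // grow the interval, matching the 117ms -> 317ms -> ... progression above
	}
	return fmt.Errorf("%s did not return 200 within %v", url, maxWait)
}

func main() {
	// Proxy URL taken from the log above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitFor200(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the failing run the dashboard service never left 503, so every attempt in this loop logs "Temporary Error" and the backoff simply stretches out until the test gives up.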
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-960153 -n functional-960153
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs -n 25: (1.54480372s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-960153 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ image     │ functional-960153 image ls                                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ image     │ functional-960153 image save --daemon kicbase/echo-server:functional-960153 --alsologtostderr                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ start     │ -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ addons    │ functional-960153 addons list                                                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ addons    │ functional-960153 addons list -o json                                                                                               │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ start     │ -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ start     │ -p functional-960153 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh stat /mount-9p/created-by-test                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh sudo umount -f /mount-9p                                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh -- ls -la /mount-9p                                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh sudo umount -f /mount-9p                                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount1                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount1                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount2                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount3                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ mount     │ -p functional-960153 --kill=true                                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-960153 --alsologtostderr -v=1                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:37:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:37:23.628690   20273 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.628794   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.628800   20273 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.628807   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.629009   20273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.629457   20273 out.go:368] Setting JSON to false
	I0929 10:37:23.630479   20273 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1189,"bootTime":1759141055,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.630563   20273 start.go:140] virtualization: kvm guest
	I0929 10:37:23.632590   20273 out.go:179] * [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.633856   20273 notify.go:220] Checking for updates...
	I0929 10:37:23.633923   20273 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.635308   20273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.636756   20273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.638012   20273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.639149   20273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.640480   20273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.642081   20273 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.642490   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.642535   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.655561   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0929 10:37:23.656012   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.656519   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.656539   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.656902   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.657084   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.657386   20273 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.657733   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.657770   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.671036   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0929 10:37:23.671450   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.671847   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.671862   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.672189   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.672387   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.703728   20273 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:37:23.705004   20273 start.go:304] selected driver: kvm2
	I0929 10:37:23.705017   20273 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.705119   20273 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.706327   20273 cni.go:84] Creating CNI manager for ""
	I0929 10:37:23.706410   20273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:37:23.706467   20273 start.go:348] cluster config:
	{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.707821   20273 out.go:179] * dry-run validation complete!
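The "Last Start" section above loads the existing functional-960153 profile and re-validates the kvm2 driver against its saved cluster config. Only a handful of fields in that very long dump matter for the dry-run check; a trimmed illustration (not minikube's full ClusterConfig type) of those values might look like:

package main

import "fmt"

// profileConfig is a hypothetical, trimmed view of the profile fields visible
// in the config dump above; field names mirror the log, not minikube's API.
type profileConfig struct {
	Name              string
	Driver            string
	ContainerRuntime  string
	KubernetesVersion string
	MemoryMB          int
	CPUs              int
	NodeIP            string
	APIServerPort     int
}

func main() {
	p := profileConfig{
		Name:              "functional-960153",
		Driver:            "kvm2",
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.34.0",
		MemoryMB:          4096,
		CPUs:              2,
		NodeIP:            "192.168.39.210",
		APIServerPort:     8441,
	}
	fmt.Printf("%+v\n", p)
}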
	
	
	==> CRI-O <==
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.067933790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142575067909618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8aeccbc-8172-422c-bcd1-41885d6b664f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.068482412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d6f3bed-4bad-4bb3-89e4-4758bce107f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.068536427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d6f3bed-4bad-4bb3-89e4-4758bce107f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.068820506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d6f3bed-4bad-4bb3-89e4-4758bce107f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.115072965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fc03b8e-37a0-4924-98f6-d39a286f4427 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.115372278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fc03b8e-37a0-4924-98f6-d39a286f4427 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.116871352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c103aedd-ae7f-4ba1-a52b-82e9d78a4014 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.117668948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142575117647035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c103aedd-ae7f-4ba1-a52b-82e9d78a4014 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.118232261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea2fd14e-b37e-4dfd-a2de-a437c605f62b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.118306214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea2fd14e-b37e-4dfd-a2de-a437c605f62b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.118586627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea2fd14e-b37e-4dfd-a2de-a437c605f62b name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.155777592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf988ca4-69fb-4d9c-8a8f-dc37113333bc name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.156021253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf988ca4-69fb-4d9c-8a8f-dc37113333bc name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.157070426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81ac199e-25cf-4740-a9c9-3f688f85a278 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.157780540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142575157759773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81ac199e-25cf-4740-a9c9-3f688f85a278 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.158873653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=445695bc-cae8-48ad-9200-c3161e0411be name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.159422713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=445695bc-cae8-48ad-9200-c3161e0411be name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.160339040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=445695bc-cae8-48ad-9200-c3161e0411be name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.198985212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c74ae2c9-3a7c-4654-a3b9-13aa1721ad3c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.199075003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c74ae2c9-3a7c-4654-a3b9-13aa1721ad3c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.201060474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4415932a-e25e-43fd-81b1-a1bbab1e0e10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.201781227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142575201757379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4415932a-e25e-43fd-81b1-a1bbab1e0e10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.202351858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32b30f75-2c45-442a-a56c-b9606e2cda10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.202408283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32b30f75-2c45-442a-a56c-b9606e2cda10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:42:55 functional-960153 crio[5468]: time="2025-09-29 10:42:55.202706650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32b30f75-2c45-442a-a56c-b9606e2cda10 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bce2ed56f12b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   b441e09c6ef2d       busybox-mount
	cded293cdc57e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   edaa6178cac15       coredns-66bc5c9577-ldskd
	32c35b0ae21a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   6aaf8d34752c1       storage-provisioner
	b2c5f49c9d29c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      6 minutes ago       Running             kube-proxy                2                   e090db4eef6fe       kube-proxy-wmdfj
	4deb47b3c0287       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      6 minutes ago       Running             kube-apiserver            0                   d03cfb836c6eb       kube-apiserver-functional-960153
	5e3e6e3f4b5ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Running             kube-controller-manager   3                   4712c91e647ce       kube-controller-manager-functional-960153
	6959f01174e97       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      6 minutes ago       Running             kube-scheduler            3                   8bb4cbce3d4d8       kube-scheduler-functional-960153
	787caf5fb5ad1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      3                   28f527c20559f       etcd-functional-960153
	c8a55ba8fa036       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       2                   4c4858d2471ef       storage-provisioner
	1b0b6a8579d11       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Exited              etcd                      2                   a573144cc0c0b       etcd-functional-960153
	4db117533d301       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      6 minutes ago       Exited              kube-scheduler            2                   9c20e6f953a18       kube-scheduler-functional-960153
	1ed5dd4c866f4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Exited              kube-controller-manager   2                   d6b292ebe7e92       kube-controller-manager-functional-960153
	683119da9f16b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Exited              coredns                   1                   a203d4614c54e       coredns-66bc5c9577-ldskd
	2d0581d84242d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      6 minutes ago       Exited              kube-proxy                1                   e55c0fbe79eb9       kube-proxy-wmdfj
	
	
	==> coredns [683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57545 - 19273 "HINFO IN 5553420383368812737.2946601077225657136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.47213075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59083 - 18141 "HINFO IN 5463811549496456981.4073937826615656044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063852137s
	
	
	==> describe nodes <==
	Name:               functional-960153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-960153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-960153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_35_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-960153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    functional-960153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f88164a3a16454d87ec4803e7696424
	  System UUID:                9f88164a-3a16-454d-87ec-4803e7696424
	  Boot ID:                    52ac99b4-d685-43b7-aae7-7d644d51c516
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-rbtgs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  default                     mysql-5bb876957f-9bzpm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m41s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 coredns-66bc5c9577-ldskd                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m42s
	  kube-system                 etcd-functional-960153                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m47s
	  kube-system                 kube-apiserver-functional-960153              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-functional-960153     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-proxy-wmdfj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-scheduler-functional-960153              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hbwbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfnm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  Starting                 6m1s                   kube-proxy       
	  Normal  Starting                 6m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m55s (x8 over 7m55s)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s (x8 over 7m55s)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m55s (x7 over 7m55s)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m48s                  kubelet          Starting kubelet.
	  Normal  NodeReady                7m47s                  kubelet          Node functional-960153 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m47s                  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s                  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s                  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m43s                  node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  NodeHasSufficientPID     6m51s (x7 over 6m51s)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m44s                  node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  Starting                 6m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)    kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)    kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m8s)    kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m                     node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000062] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009859] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190779] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085188] kauditd_printk_skb: 1 callbacks suppressed
	[Sep29 10:35] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138189] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494668] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.966317] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.548723] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.109948] kauditd_printk_skb: 11 callbacks suppressed
	[Sep29 10:36] kauditd_printk_skb: 337 callbacks suppressed
	[  +0.739809] kauditd_printk_skb: 93 callbacks suppressed
	[ +14.865249] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.108609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.992645] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.562406] kauditd_printk_skb: 164 callbacks suppressed
	[Sep29 10:37] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.047433] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000167] kauditd_printk_skb: 68 callbacks suppressed
	[ +23.144420] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.146482] kauditd_printk_skb: 31 callbacks suppressed
	[Sep29 10:39] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc] <==
	{"level":"warn","ts":"2025-09-29T10:36:07.224736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.233985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.242525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.254684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.269603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.279823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.391154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:36:33.464966Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:36:33.465048Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	{"level":"error","ts":"2025-09-29T10:36:33.465141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.541896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.543548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.543608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5a5dd032def1271d","current-leader-member-id":"5a5dd032def1271d"}
	{"level":"info","ts":"2025-09-29T10:36:33.543700Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:36:33.543743Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.543986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.544039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547342Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"error","ts":"2025-09-29T10:36:33.547405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547444Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2025-09-29T10:36:33.547452Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> etcd [787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad] <==
	{"level":"warn","ts":"2025-09-29T10:36:50.702178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.710496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.718423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.744043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.746453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.770274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.776634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.791479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.817968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.853822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.865483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.882955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.914507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.928089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.946754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.956789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.965927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.981978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.003127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.017773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.030414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.050869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.065427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.095802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.194410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:42:55 up 8 min,  0 users,  load average: 0.19, 0.31, 0.23
	Linux functional-960153 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75] <==
	I0929 10:36:52.724535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 10:36:52.787576       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 10:36:53.490012       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 10:36:53.586387       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 10:36:53.632575       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 10:36:53.648168       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 10:36:55.420498       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 10:36:55.521903       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 10:37:09.576805       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.227.93"}
	I0929 10:37:14.540431       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.250.229"}
	I0929 10:37:14.609964       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:37:23.840263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.209.15"}
	I0929 10:37:54.198010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:55.134922       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:37:55.418687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.170.200"}
	I0929 10:37:55.448292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.103"}
	I0929 10:38:04.450387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:02.514133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:06.401949       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:12.302143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:34.124990       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:39.596788       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:40.694238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:43.038044       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:49.727735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a] <==
	I0929 10:36:11.372935       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:11.374520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:36:11.378813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 10:36:11.379357       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:36:11.383757       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:36:11.387079       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 10:36:11.390356       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:36:11.391517       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:36:11.393753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:36:11.393858       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:36:11.394918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:36:11.399254       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:11.399519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:36:11.400457       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:11.403754       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:36:11.412038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:11.412062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:36:11.422408       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:36:11.422447       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:11.422640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.422717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:11.422741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:11.422896       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:11.424512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.425024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf] <==
	I0929 10:36:55.304581       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:36:55.304597       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:36:55.306874       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:55.317098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:36:55.317620       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:55.317734       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:55.318399       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:55.323169       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:55.323288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:36:55.325615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:55.325627       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:55.325633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:55.326568       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:36:55.326655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:36:55.326762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-960153"
	I0929 10:36:55.326797       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:55.327156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:36:55.331436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 10:37:55.227948       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.251462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.257857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.265371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.270669       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.275541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.280040       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0] <==
	E0929 10:36:04.895610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960153&limit=500&resourceVersion=0\": dial tcp 192.168.39.210:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 10:36:10.013346       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:10.013415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:10.013474       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:10.087391       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:10.087968       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:10.087999       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:10.117148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:10.117840       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:10.117855       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:10.128465       1 config.go:200] "Starting service config controller"
	I0929 10:36:10.128494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:10.128511       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:10.128515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:10.128524       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:10.128526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:10.128861       1 config.go:309] "Starting node config controller"
	I0929 10:36:10.136281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:10.136290       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:10.229025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:10.229073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:10.229106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6] <==
	I0929 10:36:53.971435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:54.074882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:54.078310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:54.082955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:54.232412       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:54.232522       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:54.232545       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:54.300002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:54.300332       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:54.300345       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:54.309617       1 config.go:200] "Starting service config controller"
	I0929 10:36:54.309844       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:54.310029       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:54.310175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:54.310307       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:54.310396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:54.322063       1 config.go:309] "Starting node config controller"
	I0929 10:36:54.358289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:54.358327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:54.411926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:54.411968       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:54.412002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f92873db68b73cd] <==
	I0929 10:36:06.923917       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:08.191074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:08.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:08.214607       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:08.214735       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:08.214795       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214817       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.214855       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.216911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:08.217318       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:08.316254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.316542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.317687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:33.483990       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:36:33.488911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:36:33.488954       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:33.488971       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:33.495440       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:36:33.495471       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:36:33.495508       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b] <==
	I0929 10:36:49.551846       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:51.969129       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:51.969173       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:51.983643       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:51.983753       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:51.983782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:51.983816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:51.990405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:51.990468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.083915       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:52.090742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.090863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:41:52 functional-960153 kubelet[5808]: E0929 10:41:52.526598    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8b7cd5d7-e218-4676-822e-cdd046d78a8d"
	Sep 29 10:41:57 functional-960153 kubelet[5808]: E0929 10:41:57.871538    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142517870851165  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:41:57 functional-960153 kubelet[5808]: E0929 10:41:57.871578    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142517870851165  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:03 functional-960153 kubelet[5808]: E0929 10:42:03.674749    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8b7cd5d7-e218-4676-822e-cdd046d78a8d"
	Sep 29 10:42:07 functional-960153 kubelet[5808]: E0929 10:42:07.876517    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142527875717728  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:07 functional-960153 kubelet[5808]: E0929 10:42:07.876541    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142527875717728  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:17 functional-960153 kubelet[5808]: E0929 10:42:17.880287    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142537879457683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:17 functional-960153 kubelet[5808]: E0929 10:42:17.880329    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142537879457683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:27 functional-960153 kubelet[5808]: E0929 10:42:27.881617    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142547881396836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:27 functional-960153 kubelet[5808]: E0929 10:42:27.881644    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142547881396836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.536860    5808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.536998    5808 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.537246    5808 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-rbtgs_default(76bcc9f3-165d-4de2-a963-90eb71d2cdfa): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.537286    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.886070    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142557883730747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.886115    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142557883730747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.771411    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod221fcbdd73ebea579595982187f9964d/crio-9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Error finding container 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Status 404 returned error can't find the container with id 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.771867    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0c124297-4905-4a35-9473-4bd1b565e373/crio-a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Error finding container a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Status 404 returned error can't find the container with id a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772178    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod180156b943983a6e5b8f074dd62185b8/crio-d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Error finding container d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Status 404 returned error can't find the container with id d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772509    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3581457d-4db8-4128-a3eb-f27614ec4c96/crio-4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Error finding container 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Status 404 returned error can't find the container with id 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772849    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd1bee20c8d58d621b4427e7252264eba/crio-a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Error finding container a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Status 404 returned error can't find the container with id a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.773097    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3eca0381-2478-4fd7-8b49-076c58cca999/crio-e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Error finding container e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Status 404 returned error can't find the container with id e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.888327    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142567887744520  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.888357    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142567887744520  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:49 functional-960153 kubelet[5808]: E0929 10:42:49.674565    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	
	
	==> storage-provisioner [32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11] <==
	W0929 10:42:30.840099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:32.844760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:32.851761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:34.855546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:34.860058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:36.864338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:36.872628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:38.875833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:38.884995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:40.887819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:40.893691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:42.897378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:42.906061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:44.909471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:44.914647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:46.918509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:46.923157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:48.927513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:48.932314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:50.935631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:50.940705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:52.944328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:52.949535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:54.955335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:42:54.967839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407] <==
	I0929 10:36:08.500670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 10:36:08.509633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 10:36:08.509683       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 10:36:08.512109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:11.968139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:16.228764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:19.827667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:22.881892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.906036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.912008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:25.913089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 10:36:25.913543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25412f40-1675-4ca1-a896-dcfa19247807", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1 became leader
	I0929 10:36:25.913623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:25.921899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.932418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:26.013863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:27.936027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:27.941131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.945646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.952337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.955062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.959717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
helpers_test.go:269: (dbg) Run:  kubectl --context functional-960153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-960153 describe pod busybox-mount hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-960153 describe pod busybox-mount hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1 (91.053819ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:37:47 +0000
	      Finished:     Mon, 29 Sep 2025 10:37:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7v9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b7v9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m39s  default-scheduler  Successfully assigned default/busybox-mount to functional-960153
	  Normal  Pulling    5m39s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.167s (29.766s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-rbtgs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd4fw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m32s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
	  Warning  Failed     3m37s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m22s (x2 over 5m32s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     19s (x2 over 3m37s)    kubelet            Error: ErrImagePull
	  Warning  Failed     19s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x2 over 3m37s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x2 over 3m37s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-9bzpm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ds57p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ds57p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m41s                default-scheduler  Successfully assigned default/mysql-5bb876957f-9bzpm to functional-960153
	  Warning  Failed     5m10s                kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x2 over 5m10s)  kubelet            Error: ErrImagePull
	  Warning  Failed     95s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    84s (x2 over 5m9s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     84s (x2 over 5m9s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    72s (x3 over 5m41s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jh4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5jh4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m34s                default-scheduler  Successfully assigned default/sp-pod to functional-960153
	  Warning  Failed     4m8s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     64s (x2 over 4m8s)   kubelet            Error: ErrImagePull
	  Warning  Failed     64s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    53s (x2 over 4m7s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     53s (x2 over 4m7s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    39s (x3 over 5m34s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-hbwbt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vfnm6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-960153 describe pod busybox-mount hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.35s)
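Every failed pod in the dump above is stuck on the same cause: docker.io rejects the anonymous pulls with "toomanyrequests" once the unauthenticated rate limit is exhausted. Below is a minimal Go sketch, not part of this test suite, of how that quota can be inspected directly; the token endpoint, the ratelimitpreview/test repository, and the ratelimit-* response headers are the mechanism Docker documents for checking anonymous pull allowance.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Standalone check of Docker Hub's anonymous pull allowance.
func main() {
	// 1. Obtain an anonymous pull token for the rate-limit preview repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the test manifest; the registry reports the current quota in
	//    its response headers.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

Authenticated pulls, a registry mirror, or pre-loading the images into the node (for example with minikube image load) are common ways to keep tests like these off the anonymous limit.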

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-960153 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-960153 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rbtgs" [76bcc9f3-165d-4de2-a963-90eb71d2cdfa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0929 10:37:25.042835    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.049238    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.060649    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.082041    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.123449    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.204909    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.366482    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:25.688005    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:26.330015    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:27.612086    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:30.173936    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:35.295689    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:45.537965    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 10:47:24.108734857 +0000 UTC m=+1668.174810634
functional_test.go:1645: (dbg) Run:  kubectl --context functional-960153 describe po hello-node-connect-7d85dfc575-rbtgs -n default
functional_test.go:1645: (dbg) kubectl --context functional-960153 describe po hello-node-connect-7d85dfc575-rbtgs -n default:
Name:             hello-node-connect-7d85dfc575-rbtgs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960153/192.168.39.210
Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zd4fw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
Warning  Failed     8m5s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     58s (x3 over 8m5s)   kubelet            Error: ErrImagePull
Warning  Failed     58s (x2 over 4m47s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    22s (x5 over 8m5s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     22s (x5 over 8m5s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    10s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-960153 logs hello-node-connect-7d85dfc575-rbtgs -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-960153 logs hello-node-connect-7d85dfc575-rbtgs -n default: exit status 1 (61.75633ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rbtgs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-960153 logs hello-node-connect-7d85dfc575-rbtgs -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
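The waits that time out above (functional_test.go:1645 / helpers_test.go:352) amount to polling the pod list for the label selector until every matching pod reports Ready or the 10m0s context deadline expires. A rough client-go sketch of that pattern, with assumed names and not minikube's actual helper code, looks like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Bound the whole wait by a deadline, mirroring the 10m0s budget above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=hello-node-connect"})
		if err != nil {
			return false, nil // treat list errors as transient and keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if !podReady(p) {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("pods never became ready:", err) // e.g. context deadline exceeded
		return
	}
	fmt.Println("all app=hello-node-connect pods are Ready")
}

With the echo-server image never pulled, the Ready condition stays False, so a poll of this shape can only end in "context deadline exceeded", which is exactly the error recorded above.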
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-960153 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-rbtgs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960153/192.168.39.210
Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zd4fw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
Warning  Failed     8m5s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     58s (x3 over 8m5s)   kubelet            Error: ErrImagePull
Warning  Failed     58s (x2 over 4m47s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    22s (x5 over 8m5s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     22s (x5 over 8m5s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    10s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
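The counters in these events ("x5 over 8m5s" for BackOff, "x4 over 10m" for Pulling) reflect kubelet's image-pull retry policy: each failed pull roughly doubles the delay before the next attempt, starting around 10s and capped at five minutes per the Kubernetes documentation, which is why only a handful of pulls fit into the 10-minute window. A toy Go illustration of that growth (not kubelet source) follows:

package main

import (
	"fmt"
	"time"
)

// Print the first few retry delays of an exponential back-off that starts at
// 10s, doubles on each failure, and is capped at 5 minutes.
func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failed pull #%d: next attempt in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}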

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-960153 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-960153 logs -l app=hello-node-connect: exit status 1 (65.054459ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rbtgs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-960153 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-960153 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.209.15
IPs:                      10.102.209.15
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30899/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
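Note the empty Endpoints field in the service describe above: a NodePort service only forwards to pods that pass readiness, and with the single echo-server pod stuck in ImagePullBackOff there is nothing behind NodePort 30899. A small client-go sketch (assumed file name, not part of this suite) that makes the same check programmatically:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the Endpoints object backing the hello-node-connect service.
	ep, err := client.CoreV1().Endpoints("default").Get(context.Background(), "hello-node-connect", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, subset := range ep.Subsets {
		ready += len(subset.Addresses) // only Ready pod IPs appear here
	}
	fmt.Printf("service %s has %d ready endpoint address(es)\n", ep.Name, ready)
}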
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-960153 -n functional-960153
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs -n 25
E0929 10:47:25.042829    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs -n 25: (1.491491717s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-960153 ssh sudo umount -f /mount-9p                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh -- ls -la /mount-9p                                                                                         │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh sudo umount -f /mount-9p                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount1                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount1                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount2                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount3                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ mount          │ -p functional-960153 --kill=true                                                                                                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-960153 --alsologtostderr -v=1                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format short --alsologtostderr                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format yaml --alsologtostderr                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ ssh            │ functional-960153 ssh pgrep buildkitd                                                                                             │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │                     │
	│ image          │ functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls                                                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format json --alsologtostderr                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format table --alsologtostderr                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:37:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:37:23.628690   20273 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.628794   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.628800   20273 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.628807   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.629009   20273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.629457   20273 out.go:368] Setting JSON to false
	I0929 10:37:23.630479   20273 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1189,"bootTime":1759141055,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.630563   20273 start.go:140] virtualization: kvm guest
	I0929 10:37:23.632590   20273 out.go:179] * [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.633856   20273 notify.go:220] Checking for updates...
	I0929 10:37:23.633923   20273 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.635308   20273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.636756   20273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.638012   20273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.639149   20273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.640480   20273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.642081   20273 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.642490   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.642535   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.655561   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0929 10:37:23.656012   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.656519   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.656539   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.656902   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.657084   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.657386   20273 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.657733   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.657770   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.671036   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0929 10:37:23.671450   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.671847   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.671862   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.672189   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.672387   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.703728   20273 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:37:23.705004   20273 start.go:304] selected driver: kvm2
	I0929 10:37:23.705017   20273 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.705119   20273 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.706327   20273 cni.go:84] Creating CNI manager for ""
	I0929 10:37:23.706410   20273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:37:23.706467   20273 start.go:348] cluster config:
	{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.707821   20273 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.145134060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142845145111603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d5fb423-9393-4126-8007-b7ae80789cd4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.145936467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87210c39-bd4f-473a-8580-79e35e0345e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.146151040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87210c39-bd4f-473a-8580-79e35e0345e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.147007570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87210c39-bd4f-473a-8580-79e35e0345e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.190386666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f22e07e-371e-4c6b-ae4d-ccf1c866348e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.190476365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f22e07e-371e-4c6b-ae4d-ccf1c866348e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.192588718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aeaccc6a-2f6e-46d0-9d88-7d0af218be32 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.193674851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142845193649419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aeaccc6a-2f6e-46d0-9d88-7d0af218be32 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.194434263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c9478da-a126-4c93-9853-294217831da6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.194505893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c9478da-a126-4c93-9853-294217831da6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.194994497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c9478da-a126-4c93-9853-294217831da6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.231496890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9071e10c-eab2-42d8-97a3-381eccfad118 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.231753344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9071e10c-eab2-42d8-97a3-381eccfad118 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.233719958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28df43fa-43db-457c-b6b2-b7431935ed0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.234426786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142845234404149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28df43fa-43db-457c-b6b2-b7431935ed0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.235866319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7f71528-3c73-4142-bded-217622303bbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.236324955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7f71528-3c73-4142-bded-217622303bbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.236744570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7f71528-3c73-4142-bded-217622303bbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.274967685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab08240e-af62-4c52-ba65-f9173585ad47 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.275065419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab08240e-af62-4c52-ba65-f9173585ad47 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.276989964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb825b19-210c-432c-8657-dcd7980afbe8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.279018886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142845278927625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb825b19-210c-432c-8657-dcd7980afbe8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.282499368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83fda7b0-9d3d-4182-b356-db1195ac5532 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.282732575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83fda7b0-9d3d-4182-b356-db1195ac5532 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:25 functional-960153 crio[5468]: time="2025-09-29 10:47:25.283795543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83fda7b0-9d3d-4182-b356-db1195ac5532 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bce2ed56f12b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   b441e09c6ef2d       busybox-mount
	cded293cdc57e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   edaa6178cac15       coredns-66bc5c9577-ldskd
	32c35b0ae21a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   6aaf8d34752c1       storage-provisioner
	b2c5f49c9d29c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                2                   e090db4eef6fe       kube-proxy-wmdfj
	4deb47b3c0287       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   d03cfb836c6eb       kube-apiserver-functional-960153
	5e3e6e3f4b5ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   3                   4712c91e647ce       kube-controller-manager-functional-960153
	6959f01174e97       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            3                   8bb4cbce3d4d8       kube-scheduler-functional-960153
	787caf5fb5ad1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   28f527c20559f       etcd-functional-960153
	c8a55ba8fa036       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   4c4858d2471ef       storage-provisioner
	1b0b6a8579d11       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      2                   a573144cc0c0b       etcd-functional-960153
	4db117533d301       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            2                   9c20e6f953a18       kube-scheduler-functional-960153
	1ed5dd4c866f4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   2                   d6b292ebe7e92       kube-controller-manager-functional-960153
	683119da9f16b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a203d4614c54e       coredns-66bc5c9577-ldskd
	2d0581d84242d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                1                   e55c0fbe79eb9       kube-proxy-wmdfj
	
	
	==> coredns [683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57545 - 19273 "HINFO IN 5553420383368812737.2946601077225657136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.47213075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59083 - 18141 "HINFO IN 5463811549496456981.4073937826615656044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063852137s
	
	
	==> describe nodes <==
	Name:               functional-960153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-960153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-960153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_35_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-960153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    functional-960153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f88164a3a16454d87ec4803e7696424
	  System UUID:                9f88164a-3a16-454d-87ec-4803e7696424
	  Boot ID:                    52ac99b4-d685-43b7-aae7-7d644d51c516
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6pbhb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  default                     hello-node-connect-7d85dfc575-rbtgs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-9bzpm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-ldskd                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-960153                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-960153              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-960153     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wmdfj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-960153              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hbwbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfnm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-960153 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	
	
	==> dmesg <==
	[  +0.009859] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190779] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085188] kauditd_printk_skb: 1 callbacks suppressed
	[Sep29 10:35] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138189] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494668] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.966317] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.548723] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.109948] kauditd_printk_skb: 11 callbacks suppressed
	[Sep29 10:36] kauditd_printk_skb: 337 callbacks suppressed
	[  +0.739809] kauditd_printk_skb: 93 callbacks suppressed
	[ +14.865249] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.108609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.992645] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.562406] kauditd_printk_skb: 164 callbacks suppressed
	[Sep29 10:37] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.047433] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000167] kauditd_printk_skb: 68 callbacks suppressed
	[ +23.144420] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.146482] kauditd_printk_skb: 31 callbacks suppressed
	[Sep29 10:39] kauditd_printk_skb: 74 callbacks suppressed
	[Sep29 10:43] crun[9502]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.200543] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc] <==
	{"level":"warn","ts":"2025-09-29T10:36:07.224736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.233985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.242525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.254684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.269603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.279823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.391154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:36:33.464966Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:36:33.465048Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	{"level":"error","ts":"2025-09-29T10:36:33.465141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.541896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.543548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.543608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5a5dd032def1271d","current-leader-member-id":"5a5dd032def1271d"}
	{"level":"info","ts":"2025-09-29T10:36:33.543700Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:36:33.543743Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.543986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.544039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547342Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"error","ts":"2025-09-29T10:36:33.547405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547444Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2025-09-29T10:36:33.547452Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> etcd [787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad] <==
	{"level":"warn","ts":"2025-09-29T10:36:50.744043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.746453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.770274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.776634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.791479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.817968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.853822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.865483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.882955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.914507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.928089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.946754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.956789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.965927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.981978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.003127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.017773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.030414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.050869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.065427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.095802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.194410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:46:49.818915Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1064}
	{"level":"info","ts":"2025-09-29T10:46:49.850023Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1064,"took":"30.705306ms","hash":953322333,"current-db-size-bytes":3383296,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-29T10:46:49.850067Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":953322333,"revision":1064,"compact-revision":-1}
	
	
	==> kernel <==
	 10:47:25 up 12 min,  0 users,  load average: 0.45, 0.30, 0.23
	Linux functional-960153 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75] <==
	I0929 10:37:09.576805       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.227.93"}
	I0929 10:37:14.540431       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.250.229"}
	I0929 10:37:14.609964       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:37:23.840263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.209.15"}
	I0929 10:37:54.198010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:55.134922       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:37:55.418687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.170.200"}
	I0929 10:37:55.448292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.103"}
	I0929 10:38:04.450387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:02.514133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:06.401949       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:12.302143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:34.124990       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:39.596788       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:40.694238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:43.038044       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:49.727735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:56.719384       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.130.23"}
	I0929 10:43:52.326619       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:43:52.867817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:00.097752       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:17.561940       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:24.714344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:26.911806       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:51.863467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a] <==
	I0929 10:36:11.372935       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:11.374520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:36:11.378813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 10:36:11.379357       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:36:11.383757       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:36:11.387079       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 10:36:11.390356       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:36:11.391517       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:36:11.393753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:36:11.393858       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:36:11.394918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:36:11.399254       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:11.399519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:36:11.400457       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:11.403754       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:36:11.412038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:11.412062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:36:11.422408       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:36:11.422447       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:11.422640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.422717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:11.422741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:11.422896       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:11.424512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.425024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf] <==
	I0929 10:36:55.304581       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:36:55.304597       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:36:55.306874       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:55.317098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:36:55.317620       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:55.317734       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:55.318399       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:55.323169       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:55.323288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:36:55.325615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:55.325627       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:55.325633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:55.326568       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:36:55.326655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:36:55.326762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-960153"
	I0929 10:36:55.326797       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:55.327156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:36:55.331436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 10:37:55.227948       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.251462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.257857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.265371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.270669       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.275541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.280040       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0] <==
	E0929 10:36:04.895610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960153&limit=500&resourceVersion=0\": dial tcp 192.168.39.210:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 10:36:10.013346       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:10.013415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:10.013474       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:10.087391       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:10.087968       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:10.087999       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:10.117148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:10.117840       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:10.117855       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:10.128465       1 config.go:200] "Starting service config controller"
	I0929 10:36:10.128494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:10.128511       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:10.128515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:10.128524       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:10.128526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:10.128861       1 config.go:309] "Starting node config controller"
	I0929 10:36:10.136281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:10.136290       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:10.229025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:10.229073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:10.229106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6] <==
	I0929 10:36:53.971435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:54.074882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:54.078310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:54.082955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:54.232412       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:54.232522       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:54.232545       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:54.300002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:54.300332       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:54.300345       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:54.309617       1 config.go:200] "Starting service config controller"
	I0929 10:36:54.309844       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:54.310029       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:54.310175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:54.310307       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:54.310396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:54.322063       1 config.go:309] "Starting node config controller"
	I0929 10:36:54.358289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:54.358327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:54.411926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:54.411968       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:54.412002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f92873db68b73cd] <==
	I0929 10:36:06.923917       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:08.191074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:08.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:08.214607       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:08.214735       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:08.214795       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214817       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.214855       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.216911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:08.217318       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:08.316254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.316542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.317687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:33.483990       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:36:33.488911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:36:33.488954       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:33.488971       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:33.495440       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:36:33.495471       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:36:33.495508       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b] <==
	I0929 10:36:49.551846       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:51.969129       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:51.969173       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:51.983643       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:51.983753       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:51.983782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:51.983816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:51.990405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:51.990468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.083915       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:52.090742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.090863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.675155    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.950333    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142797949850864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.950358    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142797949850864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.772174    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3eca0381-2478-4fd7-8b49-076c58cca999/crio-e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Error finding container e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Status 404 returned error can't find the container with id e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.772880    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0c124297-4905-4a35-9473-4bd1b565e373/crio-a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Error finding container a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Status 404 returned error can't find the container with id a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773162    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd1bee20c8d58d621b4427e7252264eba/crio-a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Error finding container a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Status 404 returned error can't find the container with id a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773427    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod221fcbdd73ebea579595982187f9964d/crio-9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Error finding container 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Status 404 returned error can't find the container with id 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773823    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod180156b943983a6e5b8f074dd62185b8/crio-d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Error finding container d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Status 404 returned error can't find the container with id d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.774031    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3581457d-4db8-4128-a3eb-f27614ec4c96/crio-4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Error finding container 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Status 404 returned error can't find the container with id 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.951947    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142807951548846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.951997    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142807951548846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:49 functional-960153 kubelet[5808]: E0929 10:46:49.674405    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960376    5808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960444    5808 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960641    5808 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt_kubernetes-dashboard(4f05ae5d-538c-490e-a23d-d19f009ffb42): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960671    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	Sep 29 10:46:57 functional-960153 kubelet[5808]: E0929 10:46:57.953883    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142817953581954  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:57 functional-960153 kubelet[5808]: E0929 10:46:57.953907    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142817953581954  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:02 functional-960153 kubelet[5808]: E0929 10:47:02.674640    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:47:07 functional-960153 kubelet[5808]: E0929 10:47:07.955805    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142827955473211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:07 functional-960153 kubelet[5808]: E0929 10:47:07.955828    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142827955473211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:11 functional-960153 kubelet[5808]: E0929 10:47:11.676724    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	Sep 29 10:47:17 functional-960153 kubelet[5808]: E0929 10:47:17.959128    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142837957791720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:17 functional-960153 kubelet[5808]: E0929 10:47:17.959346    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142837957791720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:24 functional-960153 kubelet[5808]: E0929 10:47:24.676511    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	
	
	==> storage-provisioner [32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11] <==
	W0929 10:47:00.248642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:02.252996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:02.259402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:04.262653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:04.267912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:06.272409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:06.280864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:08.284737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:08.290760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:10.293824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:10.299151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:12.302598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:12.307168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:14.311395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:14.316527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:16.319594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:16.324959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:18.328637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:18.338828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:20.342812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:20.347702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:22.351833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:22.361039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:24.365269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:24.375053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407] <==
	I0929 10:36:08.500670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 10:36:08.509633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 10:36:08.509683       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 10:36:08.512109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:11.968139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:16.228764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:19.827667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:22.881892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.906036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.912008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:25.913089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 10:36:25.913543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25412f40-1675-4ca1-a896-dcfa19247807", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1 became leader
	I0929 10:36:25.913623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:25.921899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.932418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:26.013863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:27.936027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:27.941131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.945646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.952337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.955062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.959717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
helpers_test.go:269: (dbg) Run:  kubectl --context functional-960153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1 (102.691552ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:37:47 +0000
	      Finished:     Mon, 29 Sep 2025 10:37:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7v9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b7v9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-960153
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m39s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.167s (29.766s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m39s  kubelet            Created container: mount-munger
	  Normal  Started    9m39s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-6pbhb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:42:56 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd7j6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd7j6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m29s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6pbhb to functional-960153
	  Warning  Failed     105s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    105s                 kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     105s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    92s (x2 over 4m29s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-rbtgs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd4fw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
	  Warning  Failed     8m7s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     60s (x3 over 8m7s)   kubelet            Error: ErrImagePull
	  Warning  Failed     60s (x2 over 4m49s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    24s (x5 over 8m7s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     24s (x5 over 8m7s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    12s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-9bzpm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ds57p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ds57p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-9bzpm to functional-960153
	  Warning  Failed     9m40s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m5s                   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m47s (x3 over 9m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m47s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m19s (x4 over 9m39s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m19s (x4 over 9m39s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m6s (x4 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jh4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5jh4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-960153
	  Warning  Failed     8m38s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m16s (x3 over 8m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m16s (x2 over 5m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    98s (x5 over 8m37s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     98s (x5 over 8m37s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    84s (x4 over 10m)      kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-hbwbt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vfnm6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1
E0929 10:52:25.042891    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.94s)
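
Note: every ErrImagePull in this test traces back to Docker Hub's unauthenticated pull rate limit (toomanyrequests) while fetching kicbase/echo-server, not to a defect in the service or tunnel under test. A minimal workaround sketch, outside this automated run and assuming Docker Hub credentials are available in DOCKERHUB_USER / DOCKERHUB_TOKEN (hypothetical variable names), is to attach an image pull secret to the default service account so the pulls are authenticated and fall under the higher per-account limit:

    # sketch only: create registry credentials and wire them into the default service account
    kubectl --context functional-960153 create secret docker-registry regcred \
      --docker-username="$DOCKERHUB_USER" --docker-password="$DOCKERHUB_TOKEN"
    kubectl --context functional-960153 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'

With the secret in place, subsequent pulls of kicbase/echo-server by pods using the default service account are authenticated and should not return toomanyrequests.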

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (367.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3581457d-4db8-4128-a3eb-f27614ec4c96] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004423581s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-960153 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-960153 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-960153 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-960153 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:37:22.012891    7691 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8b7cd5d7-e218-4676-822e-cdd046d78a8d] Pending
helpers_test.go:352: "sp-pod" [8b7cd5d7-e218-4676-822e-cdd046d78a8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 10:43:22.266587699 +0000 UTC m=+1426.332663479
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-960153 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-960153 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960153/192.168.39.210
Start Time:       Mon, 29 Sep 2025 10:37:22 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jh4w (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-5jh4w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-960153
Warning  Failed     4m34s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     90s (x2 over 4m34s)  kubelet            Error: ErrImagePull
Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    79s (x2 over 4m33s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     79s (x2 over 4m33s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    65s (x3 over 6m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-960153 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-960153 logs sp-pod -n default: exit status 1 (71.544626ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-960153 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
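
The PVC machinery itself does not appear to be what failed here: sp-pod was scheduled with the claim's volume and reports PodReadyToStartContainers=True, and only the docker.io/nginx pull is stuck in ImagePullBackOff (the same toomanyrequests limit as above). A quick check to confirm that split, sketched against the same context and assuming the cluster is still running:

    # sketch only: the claim should report Bound, and the pod's events should show only image-pull failures
    kubectl --context functional-960153 get pvc myclaim -o jsonpath='{.status.phase}{"\n"}'
    kubectl --context functional-960153 get events -n default \
      --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp
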
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-960153 -n functional-960153
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs -n 25: (1.452457192s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-960153 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ image     │ functional-960153 image ls                                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ image     │ functional-960153 image save --daemon kicbase/echo-server:functional-960153 --alsologtostderr                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ start     │ -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ addons    │ functional-960153 addons list                                                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ addons    │ functional-960153 addons list -o json                                                                                               │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ start     │ -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ start     │ -p functional-960153 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh stat /mount-9p/created-by-test                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh sudo umount -f /mount-9p                                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh -- ls -la /mount-9p                                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh sudo umount -f /mount-9p                                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount1                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount     │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh       │ functional-960153 ssh findmnt -T /mount1                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount2                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh       │ functional-960153 ssh findmnt -T /mount3                                                                                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ mount     │ -p functional-960153 --kill=true                                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-960153 --alsologtostderr -v=1                                                                      │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:37:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:37:23.628690   20273 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.628794   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.628800   20273 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.628807   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.629009   20273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.629457   20273 out.go:368] Setting JSON to false
	I0929 10:37:23.630479   20273 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1189,"bootTime":1759141055,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.630563   20273 start.go:140] virtualization: kvm guest
	I0929 10:37:23.632590   20273 out.go:179] * [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.633856   20273 notify.go:220] Checking for updates...
	I0929 10:37:23.633923   20273 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.635308   20273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.636756   20273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.638012   20273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.639149   20273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.640480   20273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.642081   20273 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.642490   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.642535   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.655561   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0929 10:37:23.656012   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.656519   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.656539   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.656902   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.657084   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.657386   20273 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.657733   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.657770   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.671036   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0929 10:37:23.671450   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.671847   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.671862   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.672189   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.672387   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.703728   20273 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:37:23.705004   20273 start.go:304] selected driver: kvm2
	I0929 10:37:23.705017   20273 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.705119   20273 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.706327   20273 cni.go:84] Creating CNI manager for ""
	I0929 10:37:23.706410   20273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:37:23.706467   20273 start.go:348] cluster config:
	{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.707821   20273 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.091373326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142603091349909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d2d1fc6-f3fe-4688-944a-900d660c454b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.092443600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b9d418c-8d84-4a6f-b6e9-c3d32c7344d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.093402041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b9d418c-8d84-4a6f-b6e9-c3d32c7344d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.094111912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b9d418c-8d84-4a6f-b6e9-c3d32c7344d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.145463087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=041aedd6-4d8a-462e-95ac-9e6ad2b9ff8d name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.145657438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=041aedd6-4d8a-462e-95ac-9e6ad2b9ff8d name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.147039821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=439cefaa-fd42-4b5a-96d1-b8157f5c7f76 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.147701044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142603147675167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=439cefaa-fd42-4b5a-96d1-b8157f5c7f76 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.149010942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=875c0fc7-f744-4698-865d-18f2dc142e8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.149349886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=875c0fc7-f744-4698-865d-18f2dc142e8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.150273384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=875c0fc7-f744-4698-865d-18f2dc142e8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.187881743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2bd8c85-0a41-412d-a6b9-d8e9ad4a121a name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.188057084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2bd8c85-0a41-412d-a6b9-d8e9ad4a121a name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.189851047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0991ce33-3645-48bb-b54a-473118a418ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.190439520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142603190419139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0991ce33-3645-48bb-b54a-473118a418ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.191091701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffd3dee3-2db2-447c-bea1-310730aee3d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.191434785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffd3dee3-2db2-447c-bea1-310730aee3d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.191965818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffd3dee3-2db2-447c-bea1-310730aee3d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.226615513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a04e746-73e9-48ba-be11-7ba0b39a5d0c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.226702893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a04e746-73e9-48ba-be11-7ba0b39a5d0c name=/runtime.v1.RuntimeService/Version
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.228146655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76876b07-02e2-4574-93a1-7d857e388a48 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.228954455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142603228928796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76876b07-02e2-4574-93a1-7d857e388a48 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.229732783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f94be8c7-0510-4d9b-94a9-7df0af0014fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.229931241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f94be8c7-0510-4d9b-94a9-7df0af0014fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:43:23 functional-960153 crio[5468]: time="2025-09-29 10:43:23.230512471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f94be8c7-0510-4d9b-94a9-7df0af0014fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bce2ed56f12b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   b441e09c6ef2d       busybox-mount
	cded293cdc57e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   edaa6178cac15       coredns-66bc5c9577-ldskd
	32c35b0ae21a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   6aaf8d34752c1       storage-provisioner
	b2c5f49c9d29c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      6 minutes ago       Running             kube-proxy                2                   e090db4eef6fe       kube-proxy-wmdfj
	4deb47b3c0287       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      6 minutes ago       Running             kube-apiserver            0                   d03cfb836c6eb       kube-apiserver-functional-960153
	5e3e6e3f4b5ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Running             kube-controller-manager   3                   4712c91e647ce       kube-controller-manager-functional-960153
	6959f01174e97       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      6 minutes ago       Running             kube-scheduler            3                   8bb4cbce3d4d8       kube-scheduler-functional-960153
	787caf5fb5ad1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      3                   28f527c20559f       etcd-functional-960153
	c8a55ba8fa036       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       2                   4c4858d2471ef       storage-provisioner
	1b0b6a8579d11       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      2                   a573144cc0c0b       etcd-functional-960153
	4db117533d301       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      7 minutes ago       Exited              kube-scheduler            2                   9c20e6f953a18       kube-scheduler-functional-960153
	1ed5dd4c866f4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      7 minutes ago       Exited              kube-controller-manager   2                   d6b292ebe7e92       kube-controller-manager-functional-960153
	683119da9f16b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   a203d4614c54e       coredns-66bc5c9577-ldskd
	2d0581d84242d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      7 minutes ago       Exited              kube-proxy                1                   e55c0fbe79eb9       kube-proxy-wmdfj
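The container status table above is the CRI-level view of the node, including the exited instances left over from earlier restarts (attempts 1 and 2). A listing like this can be reproduced directly on the node (a minimal sketch; the profile name is taken from this run):

    # list all CRI containers on the minikube node, including exited ones
    minikube -p functional-960153 ssh -- sudo crictl ps -a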
	
	
	==> coredns [683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57545 - 19273 "HINFO IN 5553420383368812737.2946601077225657136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.47213075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
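The connection-refused errors from this earlier CoreDNS instance all target the in-cluster API service address (10.96.0.1:443), i.e. they were logged while the apiserver was unreachable; the replacement instance below starts cleanly. If the kubectl context follows minikube's usual profile naming (an assumption here, matching the convention used elsewhere in this report), the current CoreDNS pods and their logs can be checked with:

    kubectl --context functional-960153 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context functional-960153 -n kube-system logs -l k8s-app=kube-dns --tail=20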
	
	
	==> coredns [cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59083 - 18141 "HINFO IN 5463811549496456981.4073937826615656044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063852137s
	
	
	==> describe nodes <==
	Name:               functional-960153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-960153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-960153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_35_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-960153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:43:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:38:24 +0000   Mon, 29 Sep 2025 10:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    functional-960153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f88164a3a16454d87ec4803e7696424
	  System UUID:                9f88164a-3a16-454d-87ec-4803e7696424
	  Boot ID:                    52ac99b4-d685-43b7-aae7-7d644d51c516
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6pbhb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     hello-node-connect-7d85dfc575-rbtgs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     mysql-5bb876957f-9bzpm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m9s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-ldskd                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m10s
	  kube-system                 etcd-functional-960153                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m15s
	  kube-system                 kube-apiserver-functional-960153              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-960153     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-wmdfj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-functional-960153              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hbwbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfnm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m8s                   kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m23s (x8 over 8m23s)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s (x8 over 8m23s)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s (x7 over 8m23s)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m15s                  kubelet          Node functional-960153 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m11s                  node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  NodeHasSufficientPID     7m19s (x7 over 7m19s)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m19s (x8 over 7m19s)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x8 over 7m19s)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m12s                  node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m28s                  node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
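This node description is ordinary kubectl output and can be regenerated against the same cluster (context name assumed to match the profile, as above):

    kubectl --context functional-960153 describe node functional-960153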
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000062] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009859] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190779] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085188] kauditd_printk_skb: 1 callbacks suppressed
	[Sep29 10:35] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138189] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494668] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.966317] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.548723] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.109948] kauditd_printk_skb: 11 callbacks suppressed
	[Sep29 10:36] kauditd_printk_skb: 337 callbacks suppressed
	[  +0.739809] kauditd_printk_skb: 93 callbacks suppressed
	[ +14.865249] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.108609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.992645] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.562406] kauditd_printk_skb: 164 callbacks suppressed
	[Sep29 10:37] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.047433] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000167] kauditd_printk_skb: 68 callbacks suppressed
	[ +23.144420] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.146482] kauditd_printk_skb: 31 callbacks suppressed
	[Sep29 10:39] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc] <==
	{"level":"warn","ts":"2025-09-29T10:36:07.224736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.233985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.242525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.254684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.269603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.279823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.391154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:36:33.464966Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:36:33.465048Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	{"level":"error","ts":"2025-09-29T10:36:33.465141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.541896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.543548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.543608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5a5dd032def1271d","current-leader-member-id":"5a5dd032def1271d"}
	{"level":"info","ts":"2025-09-29T10:36:33.543700Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:36:33.543743Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.543986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.544039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547342Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"error","ts":"2025-09-29T10:36:33.547405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547444Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2025-09-29T10:36:33.547452Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> etcd [787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad] <==
	{"level":"warn","ts":"2025-09-29T10:36:50.702178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.710496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.718423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.744043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.746453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.770274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.776634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.791479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.817968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.853822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.865483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.882955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.914507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.928089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.946754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.956789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.965927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.981978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.003127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.017773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.030414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.050869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.065427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.095802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.194410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:23 up 8 min,  0 users,  load average: 0.12, 0.29, 0.22
	Linux functional-960153 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75] <==
	I0929 10:36:52.787576       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 10:36:53.490012       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 10:36:53.586387       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 10:36:53.632575       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 10:36:53.648168       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 10:36:55.420498       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 10:36:55.521903       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 10:37:09.576805       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.227.93"}
	I0929 10:37:14.540431       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.250.229"}
	I0929 10:37:14.609964       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:37:23.840263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.209.15"}
	I0929 10:37:54.198010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:55.134922       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:37:55.418687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.170.200"}
	I0929 10:37:55.448292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.103"}
	I0929 10:38:04.450387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:02.514133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:06.401949       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:12.302143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:34.124990       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:39.596788       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:40.694238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:43.038044       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:49.727735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:56.719384       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.130.23"}
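The apiserver log shows it admitting the expected resources and allocating service ClusterIPs through 10:42, so it stayed up after the restart. Its aggregated health endpoint can be queried directly (context name assumed as above):

    kubectl --context functional-960153 get --raw '/readyz?verbose'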
	
	
	==> kube-controller-manager [1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a] <==
	I0929 10:36:11.372935       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:11.374520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:36:11.378813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 10:36:11.379357       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:36:11.383757       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:36:11.387079       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 10:36:11.390356       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:36:11.391517       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:36:11.393753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:36:11.393858       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:36:11.394918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:36:11.399254       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:11.399519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:36:11.400457       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:11.403754       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:36:11.412038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:11.412062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:36:11.422408       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:36:11.422447       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:11.422640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.422717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:11.422741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:11.422896       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:11.424512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.425024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf] <==
	I0929 10:36:55.304581       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:36:55.304597       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:36:55.306874       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:55.317098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:36:55.317620       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:55.317734       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:55.318399       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:55.323169       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:55.323288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:36:55.325615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:55.325627       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:55.325633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:55.326568       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:36:55.326655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:36:55.326762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-960153"
	I0929 10:36:55.326797       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:55.327156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:36:55.331436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 10:37:55.227948       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.251462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.257857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.265371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.270669       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.275541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.280040       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
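The "serviceaccount \"kubernetes-dashboard\" not found" errors appear to be transient ordering errors while the dashboard addon's namespace and service account were still being created at 10:37:55; the dashboard pods do show up in the node description above. Whether the account eventually materialized can be checked with (context assumed as above):

    kubectl --context functional-960153 -n kubernetes-dashboard get serviceaccounts,pods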
	
	
	==> kube-proxy [2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0] <==
	E0929 10:36:04.895610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960153&limit=500&resourceVersion=0\": dial tcp 192.168.39.210:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 10:36:10.013346       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:10.013415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:10.013474       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:10.087391       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:10.087968       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:10.087999       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:10.117148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:10.117840       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:10.117855       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:10.128465       1 config.go:200] "Starting service config controller"
	I0929 10:36:10.128494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:10.128511       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:10.128515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:10.128524       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:10.128526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:10.128861       1 config.go:309] "Starting node config controller"
	I0929 10:36:10.136281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:10.136290       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:10.229025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:10.229073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:10.229106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6] <==
	I0929 10:36:53.971435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:54.074882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:54.078310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:54.082955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:54.232412       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:54.232522       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:54.232545       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:54.300002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:54.300332       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:54.300345       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:54.309617       1 config.go:200] "Starting service config controller"
	I0929 10:36:54.309844       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:54.310029       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:54.310175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:54.310307       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:54.310396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:54.322063       1 config.go:309] "Starting node config controller"
	I0929 10:36:54.358289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:54.358327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:54.411926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:54.411968       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:54.412002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
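Both kube-proxy instances report the same missing-ip6tables condition and fall back to single-stack IPv4 with the iptables proxier, exactly as the messages above state. That the proxier actually programmed rules can be spot-checked on the node (profile name from this run; KUBE-SERVICES is kube-proxy's standard service chain):

    minikube -p functional-960153 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20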
	
	
	==> kube-scheduler [4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f92873db68b73cd] <==
	I0929 10:36:06.923917       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:08.191074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:08.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:08.214607       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:08.214735       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:08.214795       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214817       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.214855       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.216911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:08.217318       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:08.316254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.316542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.317687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:33.483990       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:36:33.488911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:36:33.488954       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:33.488971       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:33.495440       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:36:33.495471       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:36:33.495508       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b] <==
	I0929 10:36:49.551846       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:51.969129       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:51.969173       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:51.983643       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:51.983753       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:51.983782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:51.983816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:51.990405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:51.990468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.083915       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:52.090742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.090863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.537246    5808 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-rbtgs_default(76bcc9f3-165d-4de2-a963-90eb71d2cdfa): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.537286    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.886070    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142557883730747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:37 functional-960153 kubelet[5808]: E0929 10:42:37.886115    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142557883730747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.771411    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod221fcbdd73ebea579595982187f9964d/crio-9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Error finding container 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Status 404 returned error can't find the container with id 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.771867    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0c124297-4905-4a35-9473-4bd1b565e373/crio-a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Error finding container a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Status 404 returned error can't find the container with id a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772178    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod180156b943983a6e5b8f074dd62185b8/crio-d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Error finding container d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Status 404 returned error can't find the container with id d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772509    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3581457d-4db8-4128-a3eb-f27614ec4c96/crio-4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Error finding container 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Status 404 returned error can't find the container with id 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.772849    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd1bee20c8d58d621b4427e7252264eba/crio-a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Error finding container a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Status 404 returned error can't find the container with id a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.773097    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3eca0381-2478-4fd7-8b49-076c58cca999/crio-e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Error finding container e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Status 404 returned error can't find the container with id e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.888327    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142567887744520  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:47 functional-960153 kubelet[5808]: E0929 10:42:47.888357    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142567887744520  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:49 functional-960153 kubelet[5808]: E0929 10:42:49.674565    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:42:56 functional-960153 kubelet[5808]: I0929 10:42:56.825535    5808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7j6\" (UniqueName: \"kubernetes.io/projected/1b020a16-1b00-476c-9224-36952996737f-kube-api-access-zd7j6\") pod \"hello-node-75c85bcc94-6pbhb\" (UID: \"1b020a16-1b00-476c-9224-36952996737f\") " pod="default/hello-node-75c85bcc94-6pbhb"
	Sep 29 10:42:57 functional-960153 kubelet[5808]: E0929 10:42:57.891508    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142577890377740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:42:57 functional-960153 kubelet[5808]: E0929 10:42:57.891531    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142577890377740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:43:07 functional-960153 kubelet[5808]: E0929 10:43:07.893671    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142587893383680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:43:07 functional-960153 kubelet[5808]: E0929 10:43:07.893695    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142587893383680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:43:08 functional-960153 kubelet[5808]: E0929 10:43:08.192755    5808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:43:08 functional-960153 kubelet[5808]: E0929 10:43:08.192821    5808 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:43:08 functional-960153 kubelet[5808]: E0929 10:43:08.193036    5808 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt_kubernetes-dashboard(4f05ae5d-538c-490e-a23d-d19f009ffb42): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:43:08 functional-960153 kubelet[5808]: E0929 10:43:08.193098    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	Sep 29 10:43:17 functional-960153 kubelet[5808]: E0929 10:43:17.896058    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142597895083142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:43:17 functional-960153 kubelet[5808]: E0929 10:43:17.896105    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142597895083142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 29 10:43:18 functional-960153 kubelet[5808]: E0929 10:43:18.676982    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	
	
	==> storage-provisioner [32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11] <==
	W0929 10:42:58.991014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:00.994965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:01.000325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:03.003829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:03.010590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:05.013842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:05.019607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:07.022840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:07.027968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:09.031040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:09.035421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:11.038630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:11.043833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:13.048409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:13.053067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:15.056248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:15.065801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:17.069276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:17.074691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:19.078271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:19.083717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:21.087173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:21.091888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:23.098145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:43:23.107561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407] <==
	I0929 10:36:08.500670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 10:36:08.509633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 10:36:08.509683       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 10:36:08.512109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:11.968139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:16.228764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:19.827667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:22.881892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.906036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.912008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:25.913089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 10:36:25.913543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25412f40-1675-4ca1-a896-dcfa19247807", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1 became leader
	I0929 10:36:25.913623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:25.921899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.932418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:26.013863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:27.936027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:27.941131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.945646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.952337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.955062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.959717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
helpers_test.go:269: (dbg) Run:  kubectl --context functional-960153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1 (104.687374ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:37:47 +0000
	      Finished:     Mon, 29 Sep 2025 10:37:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7v9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b7v9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m7s   default-scheduler  Successfully assigned default/busybox-mount to functional-960153
	  Normal  Pulling    6m7s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m37s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.167s (29.766s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m37s  kubelet            Created container: mount-munger
	  Normal  Started    5m37s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-6pbhb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:42:56 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd7j6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd7j6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  27s   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6pbhb to functional-960153
	  Normal  Pulling    27s   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-rbtgs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd4fw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
	  Warning  Failed     4m5s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     47s (x2 over 4m5s)  kubelet            Error: ErrImagePull
	  Warning  Failed     47s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    35s (x2 over 4m5s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     35s (x2 over 4m5s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x3 over 6m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-9bzpm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ds57p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ds57p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-9bzpm to functional-960153
	  Warning  Failed     5m38s                 kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m3s (x2 over 5m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m3s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    112s (x2 over 5m37s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     112s (x2 over 5m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x3 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jh4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5jh4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-960153
	  Warning  Failed     4m36s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x2 over 4m36s)  kubelet            Error: ErrImagePull
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    81s (x2 over 4m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     81s (x2 over 4m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-hbwbt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vfnm6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.92s)
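Every ImagePullBackOff in the post-mortem above carries the same registry error (toomanyrequests from docker.io), i.e. the anonymous Docker Hub pull quota was exhausted on this runner. Below is a minimal Go sketch, separate from the minikube test suite, for checking the remaining anonymous quota from the build host; the token endpoint, the ratelimitpreview/test repository, and the ratelimit-* header names are assumptions taken from Docker's published rate-limit check procedure, and per that documentation a HEAD request should not itself consume quota.

// ratelimit_check.go - hypothetical helper, not part of this test run.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Request an anonymous pull token for Docker's public rate-limit test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; the current quota is reported in the response headers.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	// Headers are empty for authenticated accounts without limits; otherwise they
	// look like "100;w=21600" (100 pulls per 6-hour window).
	fmt.Println("ratelimit-limit:     ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining: ", res.Header.Get("ratelimit-remaining"))
	fmt.Println("docker-ratelimit-source:", res.Header.Get("docker-ratelimit-source"))
}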

                                                
                                    
TestFunctional/parallel/MySQL (602.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-960153 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9bzpm" [65d5a92b-6560-4fff-aa3f-d76c657d9aef] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 10:47:14.895985402 +0000 UTC m=+1658.962061179
functional_test.go:1804: (dbg) Run:  kubectl --context functional-960153 describe po mysql-5bb876957f-9bzpm -n default
functional_test.go:1804: (dbg) kubectl --context functional-960153 describe po mysql-5bb876957f-9bzpm -n default:
Name:             mysql-5bb876957f-9bzpm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960153/192.168.39.210
Start Time:       Mon, 29 Sep 2025 10:37:14 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ds57p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ds57p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-9bzpm to functional-960153
Warning  Failed     9m28s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m53s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m35s (x3 over 9m28s)  kubelet            Error: ErrImagePull
Warning  Failed     2m35s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m7s (x4 over 9m27s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     2m7s (x4 over 9m27s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    114s (x4 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-960153 logs mysql-5bb876957f-9bzpm -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-960153 logs mysql-5bb876957f-9bzpm -n default: exit status 1 (75.336127ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-9bzpm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-960153 logs mysql-5bb876957f-9bzpm -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-960153 -n functional-960153
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs -n 25: (1.517459872s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-960153 ssh sudo umount -f /mount-9p                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh -- ls -la /mount-9p                                                                                         │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh sudo umount -f /mount-9p                                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount1                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ mount          │ -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1                │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ ssh            │ functional-960153 ssh findmnt -T /mount1                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount2                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ ssh            │ functional-960153 ssh findmnt -T /mount3                                                                                          │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │ 29 Sep 25 10:37 UTC │
	│ mount          │ -p functional-960153 --kill=true                                                                                                  │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-960153 --alsologtostderr -v=1                                                                    │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:37 UTC │                     │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ update-context │ functional-960153 update-context --alsologtostderr -v=2                                                                           │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format short --alsologtostderr                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format yaml --alsologtostderr                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ ssh            │ functional-960153 ssh pgrep buildkitd                                                                                             │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │                     │
	│ image          │ functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr                            │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls                                                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format json --alsologtostderr                                                                        │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	│ image          │ functional-960153 image ls --format table --alsologtostderr                                                                       │ functional-960153 │ jenkins │ v1.37.0 │ 29 Sep 25 10:43 UTC │ 29 Sep 25 10:43 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:37:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:37:23.628690   20273 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.628794   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.628800   20273 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.628807   20273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.629009   20273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.629457   20273 out.go:368] Setting JSON to false
	I0929 10:37:23.630479   20273 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1189,"bootTime":1759141055,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.630563   20273 start.go:140] virtualization: kvm guest
	I0929 10:37:23.632590   20273 out.go:179] * [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.633856   20273 notify.go:220] Checking for updates...
	I0929 10:37:23.633923   20273 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.635308   20273 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.636756   20273 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.638012   20273 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.639149   20273 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.640480   20273 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.642081   20273 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.642490   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.642535   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.655561   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0929 10:37:23.656012   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.656519   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.656539   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.656902   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.657084   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.657386   20273 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.657733   20273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.657770   20273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.671036   20273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0929 10:37:23.671450   20273 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.671847   20273 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.671862   20273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.672189   20273 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.672387   20273 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.703728   20273 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:37:23.705004   20273 start.go:304] selected driver: kvm2
	I0929 10:37:23.705017   20273 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.705119   20273 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.706327   20273 cni.go:84] Creating CNI manager for ""
	I0929 10:37:23.706410   20273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:37:23.706467   20273 start.go:348] cluster config:
	{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.707821   20273 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.825909989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28599a25-80d9-4543-90da-99df0899e0a6 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.826588709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c2d1567-d4b7-4fd1-9e77-8748cc4b3e69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.827392710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c2d1567-d4b7-4fd1-9e77-8748cc4b3e69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.827744175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c2d1567-d4b7-4fd1-9e77-8748cc4b3e69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.828720588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8db197c9-8e6e-4e13-9121-f122c96df084 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.830408037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142835830385788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8db197c9-8e6e-4e13-9121-f122c96df084 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.831531382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=431105bb-c8d7-4000-a192-4a489b8c89b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.831721198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=431105bb-c8d7-4000-a192-4a489b8c89b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.832082565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=431105bb-c8d7-4000-a192-4a489b8c89b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.871821776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=011c30a5-d1f6-4677-93f2-cb8ace9acf30 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.871938642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=011c30a5-d1f6-4677-93f2-cb8ace9acf30 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.873042913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5148520e-355a-4425-81b2-a56b6171c98d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.873718680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142835873695574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5148520e-355a-4425-81b2-a56b6171c98d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.874435014Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad786cf9-a9b9-412b-af79-0dbb40b1eac2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.874491170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad786cf9-a9b9-412b-af79-0dbb40b1eac2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.874758497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad786cf9-a9b9-412b-af79-0dbb40b1eac2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.910790460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a324c00a-182b-4523-b412-949f7483166e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.910880232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a324c00a-182b-4523-b412-949f7483166e name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.912154273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82d473f9-faa1-4109-9afa-50544ed5f999 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.913492845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759142835913468981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82d473f9-faa1-4109-9afa-50544ed5f999 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.914303533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1fcf5df-bb7e-49a9-96b8-b60c6ebea053 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.914374910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1fcf5df-bb7e-49a9-96b8-b60c6ebea053 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.914636891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b,PodSandboxId:b441e09c6ef2d85697a1766bd612b4bc9f280229f01444a0a2bf5bce9cb85d1a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759142267674479435,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 818a1168-13eb-40e5-a11e-ed073c8ca85f,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b,PodSandboxId:edaa6178cac15b67b074c8c50398d9aad7f133dc6c25e535430bd7a0ce288991,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759142213671533532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11,PodSandboxId:6aaf8d34752c12e0e82b95635ab96099dccecef966383a32e03cc2511abd751b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759142213393501916,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6,PodSandboxId:e090db4eef6fe986ce3cca0412b997b8f49d951f4538cae30223710ed8bb293b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759142213363502977,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75,PodSandboxId:d03cfb836c6eb825dfd36aeff2559674feffef4d71647a4fa5e40841f7caa6d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759142208644604932,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012a907cd467a90f54ee8123eaaa32be,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf,PodSandboxId:4712c91e647ceaf8f356de2bbf7458284f050c978c523cfc8ad352aa21e1d4f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c766
0634,State:CONTAINER_RUNNING,CreatedAt:1759142208594419867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b,PodSandboxId:8bb4cbce3d4d8ab85fb40f35ec5dc3953224be17b5a81fa59525219e48857513,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759142208583538318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad,PodSandboxId:28f527c20559fbf462b7e6f663362919ff57950165ff0336c8ad8d31761fb58f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,}
,Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759142208563010293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407,PodSandboxId:4c4858d2471eff7566113f9c7c7352ad8f4b
ff95ac40341dc662526bed7fe51f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759142168428147313,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3581457d-4db8-4128-a3eb-f27614ec4c96,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc,PodSandboxId:a573144cc0c0bc839091045888712713e75cb8309f6d89841
454c97d272220e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759142164814577260,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bee20c8d58d621b4427e7252264eba,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f
92873db68b73cd,PodSandboxId:9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759142164800803103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 221fcbdd73ebea579595982187f9964d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a,PodSandboxId:d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759142164772706634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-960153,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 180156b943983a6e5b8f074dd62185b8,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa,PodSandboxId:a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759142160978984238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ldskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c124297-4905-4a35-9473-4bd1b565e373,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort
\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0,PodSandboxId:e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759142160182540582,Lab
els:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca0381-2478-4fd7-8b49-076c58cca999,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1fcf5df-bb7e-49a9-96b8-b60c6ebea053 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.941994582Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d3ea5e44-619b-4439-9b63-69c62f6f09a6 name=/runtime.v1.RuntimeService/Version
	Sep 29 10:47:15 functional-960153 crio[5468]: time="2025-09-29 10:47:15.942064088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3ea5e44-619b-4439-9b63-69c62f6f09a6 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bce2ed56f12b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   b441e09c6ef2d       busybox-mount
	cded293cdc57e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   edaa6178cac15       coredns-66bc5c9577-ldskd
	32c35b0ae21a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   6aaf8d34752c1       storage-provisioner
	b2c5f49c9d29c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                2                   e090db4eef6fe       kube-proxy-wmdfj
	4deb47b3c0287       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   d03cfb836c6eb       kube-apiserver-functional-960153
	5e3e6e3f4b5ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   3                   4712c91e647ce       kube-controller-manager-functional-960153
	6959f01174e97       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            3                   8bb4cbce3d4d8       kube-scheduler-functional-960153
	787caf5fb5ad1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   28f527c20559f       etcd-functional-960153
	c8a55ba8fa036       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   4c4858d2471ef       storage-provisioner
	1b0b6a8579d11       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      2                   a573144cc0c0b       etcd-functional-960153
	4db117533d301       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            2                   9c20e6f953a18       kube-scheduler-functional-960153
	1ed5dd4c866f4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   2                   d6b292ebe7e92       kube-controller-manager-functional-960153
	683119da9f16b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a203d4614c54e       coredns-66bc5c9577-ldskd
	2d0581d84242d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                1                   e55c0fbe79eb9       kube-proxy-wmdfj
	
	
	==> coredns [683119da9f16b7dcd89cbe4f5b4cb2d5be00d01afd630abc745ea2e4a5909caa] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57545 - 19273 "HINFO IN 5553420383368812737.2946601077225657136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.47213075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cded293cdc57ead1f28a98da4250d1baf57a2d59a9a93f1d3ee2372dd051ef9b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59083 - 18141 "HINFO IN 5463811549496456981.4073937826615656044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063852137s
	
	
	==> describe nodes <==
	Name:               functional-960153
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-960153
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-960153
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_35_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-960153
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:47:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:47:05 +0000   Mon, 29 Sep 2025 10:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    functional-960153
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f88164a3a16454d87ec4803e7696424
	  System UUID:                9f88164a-3a16-454d-87ec-4803e7696424
	  Boot ID:                    52ac99b4-d685-43b7-aae7-7d644d51c516
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6pbhb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  default                     hello-node-connect-7d85dfc575-rbtgs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  default                     mysql-5bb876957f-9bzpm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-ldskd                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-960153                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-960153              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-960153     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wmdfj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-960153              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hbwbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vfnm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-960153 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-960153 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-960153 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-960153 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-960153 event: Registered Node functional-960153 in Controller
	
	
	==> dmesg <==
	[  +0.009859] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.190779] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085188] kauditd_printk_skb: 1 callbacks suppressed
	[Sep29 10:35] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138189] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494668] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.966317] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.548723] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.109948] kauditd_printk_skb: 11 callbacks suppressed
	[Sep29 10:36] kauditd_printk_skb: 337 callbacks suppressed
	[  +0.739809] kauditd_printk_skb: 93 callbacks suppressed
	[ +14.865249] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.108609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.992645] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.562406] kauditd_printk_skb: 164 callbacks suppressed
	[Sep29 10:37] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.047433] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000167] kauditd_printk_skb: 68 callbacks suppressed
	[ +23.144420] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.146482] kauditd_printk_skb: 31 callbacks suppressed
	[Sep29 10:39] kauditd_printk_skb: 74 callbacks suppressed
	[Sep29 10:43] crun[9502]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.200543] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [1b0b6a8579d1174446415645dfdbe88cb1e73c10668c2e2916710fdd235bbffc] <==
	{"level":"warn","ts":"2025-09-29T10:36:07.224736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.233985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.242525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.254684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.269603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.279823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:07.391154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:36:33.464966Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:36:33.465048Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	{"level":"error","ts":"2025-09-29T10:36:33.465141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.541896Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:36:33.543548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.543608Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5a5dd032def1271d","current-leader-member-id":"5a5dd032def1271d"}
	{"level":"info","ts":"2025-09-29T10:36:33.543700Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:36:33.543743Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.543974Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.543986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:36:33.544033Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:36:33.544039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547342Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"error","ts":"2025-09-29T10:36:33.547405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.210:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:36:33.547444Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2025-09-29T10:36:33.547452Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-960153","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> etcd [787caf5fb5ad1e85135ce6b6eed843c8946cc916e1dea741a37bf72c666360ad] <==
	{"level":"warn","ts":"2025-09-29T10:36:50.744043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.746453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.770274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.776634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.791479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.817968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.853822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.865483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.882955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.914507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.928089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.946754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.956789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.965927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:50.981978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.003127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.017773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.030414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.050869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.065427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.095802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:51.194410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:46:49.818915Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1064}
	{"level":"info","ts":"2025-09-29T10:46:49.850023Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1064,"took":"30.705306ms","hash":953322333,"current-db-size-bytes":3383296,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-29T10:46:49.850067Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":953322333,"revision":1064,"compact-revision":-1}
	
	
	==> kernel <==
	 10:47:16 up 12 min,  0 users,  load average: 0.25, 0.26, 0.21
	Linux functional-960153 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4deb47b3c02873f7ea4b7a1d04550ee0b7d35c8ea854d454513b6e2cbf954c75] <==
	I0929 10:37:09.576805       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.227.93"}
	I0929 10:37:14.540431       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.250.229"}
	I0929 10:37:14.609964       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:37:23.840263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.209.15"}
	I0929 10:37:54.198010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:55.134922       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:37:55.418687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.170.200"}
	I0929 10:37:55.448292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.103"}
	I0929 10:38:04.450387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:02.514133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:06.401949       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:12.302143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:34.124990       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:39.596788       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:40.694238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:43.038044       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:49.727735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:56.719384       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.130.23"}
	I0929 10:43:52.326619       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:43:52.867817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:00.097752       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:17.561940       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:24.714344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:26.911806       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:51.863467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1ed5dd4c866f42be3e3de5673948bc41285cc1aa68080ca19b9cc4db61be112a] <==
	I0929 10:36:11.372935       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:11.374520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:36:11.378813       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 10:36:11.379357       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:36:11.383757       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:36:11.387079       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 10:36:11.390356       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:36:11.391517       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:36:11.393753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:36:11.393858       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:36:11.394918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:36:11.399254       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:11.399519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:36:11.400457       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:11.403754       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:36:11.412038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:11.412062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:36:11.422408       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:36:11.422447       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:11.422640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.422717       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:11.422741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:11.422896       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:11.424512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:11.425024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [5e3e6e3f4b5ff111dd1a8ac7df7c60d120ddc205a5b69aeeb209e487a8e405bf] <==
	I0929 10:36:55.304581       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 10:36:55.304597       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 10:36:55.306874       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:36:55.317098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:36:55.317620       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:36:55.317734       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:36:55.318399       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:36:55.323169       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:55.323288       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:36:55.325615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:36:55.325627       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:36:55.325633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:36:55.326568       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:36:55.326655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:36:55.326762       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-960153"
	I0929 10:36:55.326797       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:36:55.327156       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:36:55.331436       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 10:37:55.227948       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.251462       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.257857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.265371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.270669       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.275541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:37:55.280040       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2d0581d84242ded174512127eb8e83baa3cfc5507e63f9a35e26d78ee58e66d0] <==
	E0929 10:36:04.895610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-960153&limit=500&resourceVersion=0\": dial tcp 192.168.39.210:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 10:36:10.013346       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:10.013415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:10.013474       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:10.087391       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:10.087968       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:10.087999       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:10.117148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:10.117840       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:10.117855       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:10.128465       1 config.go:200] "Starting service config controller"
	I0929 10:36:10.128494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:10.128511       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:10.128515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:10.128524       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:10.128526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:10.128861       1 config.go:309] "Starting node config controller"
	I0929 10:36:10.136281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:10.136290       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:10.229025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:10.229073       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:10.229106       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b2c5f49c9d29c0f3ad3c29c93e2fa675c3c78f618b93494189ca0e15d4171ad6] <==
	I0929 10:36:53.971435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:54.074882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:54.078310       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.210"]
	E0929 10:36:54.082955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:54.232412       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 10:36:54.232522       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 10:36:54.232545       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:54.300002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:54.300332       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:54.300345       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:54.309617       1 config.go:200] "Starting service config controller"
	I0929 10:36:54.309844       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:54.310029       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:54.310175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:54.310307       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:54.310396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:54.322063       1 config.go:309] "Starting node config controller"
	I0929 10:36:54.358289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:54.358327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:54.411926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:54.411968       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:54.412002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4db117533d3015e6a9220366ff1ceefe1f010bcdf2f8570c4f92873db68b73cd] <==
	I0929 10:36:06.923917       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:08.191074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:08.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:08.214607       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:08.214735       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:08.214795       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214817       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.214840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.214855       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.216911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:08.217318       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:08.316254       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:08.316542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:08.317687       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:33.483990       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:36:33.488911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:36:33.488954       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:33.488971       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:33.495440       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:36:33.495471       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:36:33.495508       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6959f01174e974ade40c3bfc16a814dfb166bdc4f86d4036198ad04d3c51951b] <==
	I0929 10:36:49.551846       1 serving.go:386] Generated self-signed cert in-memory
	I0929 10:36:51.969129       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:36:51.969173       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:51.983643       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:36:51.983753       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0929 10:36:51.983782       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 10:36:51.983816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:36:51.990405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:36:51.990462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:51.990468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.083915       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 10:36:52.090742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 10:36:52.090863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:46:26 functional-960153 kubelet[5808]: E0929 10:46:26.304620    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:46:27 functional-960153 kubelet[5808]: E0929 10:46:27.948634    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142787948158436  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:27 functional-960153 kubelet[5808]: E0929 10:46:27.948675    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142787948158436  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.675155    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.950333    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142797949850864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:37 functional-960153 kubelet[5808]: E0929 10:46:37.950358    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142797949850864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.772174    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3eca0381-2478-4fd7-8b49-076c58cca999/crio-e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Error finding container e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622: Status 404 returned error can't find the container with id e55c0fbe79eb90190873b0229d04882052707fd81c6af59236c3ea676fbe6622
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.772880    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0c124297-4905-4a35-9473-4bd1b565e373/crio-a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Error finding container a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba: Status 404 returned error can't find the container with id a203d4614c54e26e5a589931b4d36de78fad0d892b7991e993c8600b853c8eba
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773162    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd1bee20c8d58d621b4427e7252264eba/crio-a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Error finding container a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0: Status 404 returned error can't find the container with id a573144cc0c0bc839091045888712713e75cb8309f6d89841454c97d272220e0
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773427    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod221fcbdd73ebea579595982187f9964d/crio-9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Error finding container 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766: Status 404 returned error can't find the container with id 9c20e6f953a181181161fedd4e77f9482753e049a003ea495ac7a11efebd5766
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.773823    5808 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod180156b943983a6e5b8f074dd62185b8/crio-d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Error finding container d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360: Status 404 returned error can't find the container with id d6b292ebe7e92d66f83cfe5483c11dbc296630f6ad1b6e494afc4d9b6aff4360
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.774031    5808 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3581457d-4db8-4128-a3eb-f27614ec4c96/crio-4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Error finding container 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f: Status 404 returned error can't find the container with id 4c4858d2471eff7566113f9c7c7352ad8f4bff95ac40341dc662526bed7fe51f
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.951947    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142807951548846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:47 functional-960153 kubelet[5808]: E0929 10:46:47.951997    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142807951548846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:49 functional-960153 kubelet[5808]: E0929 10:46:49.674405    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960376    5808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960444    5808 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960641    5808 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt_kubernetes-dashboard(4f05ae5d-538c-490e-a23d-d19f009ffb42): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:46:56 functional-960153 kubelet[5808]: E0929 10:46:56.960671    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	Sep 29 10:46:57 functional-960153 kubelet[5808]: E0929 10:46:57.953883    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142817953581954  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:46:57 functional-960153 kubelet[5808]: E0929 10:46:57.953907    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142817953581954  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:02 functional-960153 kubelet[5808]: E0929 10:47:02.674640    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-rbtgs" podUID="76bcc9f3-165d-4de2-a963-90eb71d2cdfa"
	Sep 29 10:47:07 functional-960153 kubelet[5808]: E0929 10:47:07.955805    5808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142827955473211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:07 functional-960153 kubelet[5808]: E0929 10:47:07.955828    5808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142827955473211  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Sep 29 10:47:11 functional-960153 kubelet[5808]: E0929 10:47:11.676724    5808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hbwbt" podUID="4f05ae5d-538c-490e-a23d-d19f009ffb42"
	
	
	==> storage-provisioner [32c35b0ae21a19f783260f4ba368b53beb2fca7d75595d46d397886bc1018a11] <==
	W0929 10:46:52.208093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:54.211396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:54.216188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:56.219844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:56.225130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:58.230757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:58.235974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:00.239132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:00.248642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:02.252996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:02.259402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:04.262653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:04.267912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:06.272409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:06.280864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:08.284737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:08.290760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:10.293824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:10.299151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:12.302598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:12.307168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:14.311395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:14.316527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:16.319594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:16.324959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8a55ba8fa0366e66f40d41eee3f65187820205a686f93eba5c1898309806407] <==
	I0929 10:36:08.500670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 10:36:08.509633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 10:36:08.509683       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 10:36:08.512109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:11.968139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:16.228764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:19.827667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:22.881892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.906036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.912008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:25.913089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 10:36:25.913543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25412f40-1675-4ca1-a896-dcfa19247807", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1 became leader
	I0929 10:36:25.913623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:25.921899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:25.932418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:36:26.013863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-960153_65f4ca3f-4720-4696-9b70-1b21f4e35fd1!
	W0929 10:36:27.936027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:27.941131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.945646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:29.952337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.955062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:36:31.959717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
helpers_test.go:269: (dbg) Run:  kubectl --context functional-960153 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1 (99.352235ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2bce2ed56f12be0eae070eadfc38e29f357517cad2fd6165ab487c6120688d9b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:37:47 +0000
	      Finished:     Mon, 29 Sep 2025 10:37:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7v9g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b7v9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m59s  default-scheduler  Successfully assigned default/busybox-mount to functional-960153
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.167s (29.766s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m30s  kubelet            Created container: mount-munger
	  Normal  Started    9m30s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-6pbhb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:42:56 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd7j6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd7j6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m20s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6pbhb to functional-960153
	  Warning  Failed     96s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    96s                  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     96s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    83s (x2 over 4m20s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-rbtgs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:23 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd4fw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zd4fw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m53s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rbtgs to functional-960153
	  Warning  Failed     7m58s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x3 over 7m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     51s (x2 over 4m40s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    15s (x5 over 7m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s (x5 over 7m58s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x4 over 9m53s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-9bzpm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ds57p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ds57p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-9bzpm to functional-960153
	  Warning  Failed     9m31s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m56s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s (x3 over 9m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m38s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m10s (x4 over 9m30s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m10s (x4 over 9m30s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    117s (x4 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-960153/192.168.39.210
	Start Time:       Mon, 29 Sep 2025 10:37:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jh4w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-5jh4w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m55s                 default-scheduler  Successfully assigned default/sp-pod to functional-960153
	  Warning  Failed     8m29s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m7s (x3 over 8m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m7s (x2 over 5m25s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    89s (x5 over 8m28s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     89s (x5 over 8m28s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    75s (x4 over 9m55s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-hbwbt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vfnm6" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-960153 describe pod busybox-mount hello-node-75c85bcc94-6pbhb hello-node-connect-7d85dfc575-rbtgs mysql-5bb876957f-9bzpm sp-pod dashboard-metrics-scraper-77bf4d6c4c-hbwbt kubernetes-dashboard-855c9754f9-vfnm6: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.89s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-960153 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-960153 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6pbhb" [1b020a16-1b00-476c-9224-36952996737f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-960153 -n functional-960153
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 10:52:56.996029601 +0000 UTC m=+2001.062105385
functional_test.go:1460: (dbg) Run:  kubectl --context functional-960153 describe po hello-node-75c85bcc94-6pbhb -n default
functional_test.go:1460: (dbg) kubectl --context functional-960153 describe po hello-node-75c85bcc94-6pbhb -n default:
Name:             hello-node-75c85bcc94-6pbhb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-960153/192.168.39.210
Start Time:       Mon, 29 Sep 2025 10:42:56 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
IP:           10.244.0.13
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zd7j6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zd7j6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6pbhb to functional-960153
Warning  Failed     4m14s (x2 over 7m16s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     56s (x3 over 7m16s)    kubelet            Error: ErrImagePull
Warning  Failed     56s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    18s (x5 over 7m16s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     18s (x5 over 7m16s)    kubelet            Error: ImagePullBackOff
Normal   Pulling    4s (x4 over 10m)       kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-960153 logs hello-node-75c85bcc94-6pbhb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-960153 logs hello-node-75c85bcc94-6pbhb -n default: exit status 1 (68.477349ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-6pbhb" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-960153 logs hello-node-75c85bcc94-6pbhb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 service --namespace=default --https --url hello-node: exit status 115 (285.826974ms)

-- stdout --
	https://192.168.39.210:30086
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-960153 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 service hello-node --url --format={{.IP}}: exit status 115 (287.638169ms)

-- stdout --
	192.168.39.210
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-960153 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.29s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 service hello-node --url: exit status 115 (289.966005ms)

-- stdout --
	http://192.168.39.210:30086
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-960153 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.210:30086
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.29s)

x
+
TestPreload (125.86s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-858390 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-858390 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m8.118291402s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858390 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-858390 image pull gcr.io/k8s-minikube/busybox: (1.445378783s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-858390
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-858390: (6.902358454s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-858390 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:32:14.613659    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:32:25.042536    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-858390 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.369318902s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858390 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-29 11:32:55.285049364 +0000 UTC m=+4399.351125138
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-858390 -n test-preload-858390
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-858390 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-858390 logs -n 25: (1.186505384s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-007768 ssh -n multinode-007768-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh     │ multinode-007768 ssh -n multinode-007768 sudo cat /home/docker/cp-test_multinode-007768-m03_multinode-007768.txt                                                                    │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ cp      │ multinode-007768 cp multinode-007768-m03:/home/docker/cp-test.txt multinode-007768-m02:/home/docker/cp-test_multinode-007768-m03_multinode-007768-m02.txt                           │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh     │ multinode-007768 ssh -n multinode-007768-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh     │ multinode-007768 ssh -n multinode-007768-m02 sudo cat /home/docker/cp-test_multinode-007768-m03_multinode-007768-m02.txt                                                            │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ node    │ multinode-007768 node stop m03                                                                                                                                                      │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ node    │ multinode-007768 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ node    │ list -p multinode-007768                                                                                                                                                            │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ stop    │ -p multinode-007768                                                                                                                                                                 │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:23 UTC │
	│ start   │ -p multinode-007768 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:23 UTC │ 29 Sep 25 11:25 UTC │
	│ node    │ list -p multinode-007768                                                                                                                                                            │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │                     │
	│ node    │ multinode-007768 node delete m03                                                                                                                                                    │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ stop    │ multinode-007768 stop                                                                                                                                                               │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:28 UTC │
	│ start   │ -p multinode-007768 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:28 UTC │ 29 Sep 25 11:30 UTC │
	│ node    │ list -p multinode-007768                                                                                                                                                            │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p multinode-007768-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-007768-m02 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ start   │ -p multinode-007768-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-007768-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ node    │ add -p multinode-007768                                                                                                                                                             │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │                     │
	│ delete  │ -p multinode-007768-m03                                                                                                                                                             │ multinode-007768-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ delete  │ -p multinode-007768                                                                                                                                                                 │ multinode-007768     │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:30 UTC │
	│ start   │ -p test-preload-858390 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-858390  │ jenkins │ v1.37.0 │ 29 Sep 25 11:30 UTC │ 29 Sep 25 11:32 UTC │
	│ image   │ test-preload-858390 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-858390  │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ stop    │ -p test-preload-858390                                                                                                                                                              │ test-preload-858390  │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ start   │ -p test-preload-858390 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-858390  │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image   │ test-preload-858390 image list                                                                                                                                                      │ test-preload-858390  │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:32:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:32:08.734127   46143 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:32:08.734232   46143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:08.734238   46143 out.go:374] Setting ErrFile to fd 2...
	I0929 11:32:08.734244   46143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:08.734497   46143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:32:08.734994   46143 out.go:368] Setting JSON to false
	I0929 11:32:08.735879   46143 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4474,"bootTime":1759141055,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:32:08.735959   46143 start.go:140] virtualization: kvm guest
	I0929 11:32:08.737971   46143 out.go:179] * [test-preload-858390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:32:08.739458   46143 notify.go:220] Checking for updates...
	I0929 11:32:08.739485   46143 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:32:08.741149   46143 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:32:08.742481   46143 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:32:08.743608   46143 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:32:08.744867   46143 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:32:08.746031   46143 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:32:08.747605   46143 config.go:182] Loaded profile config "test-preload-858390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:32:08.748028   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:08.748102   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:08.761170   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0929 11:32:08.761698   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:08.762368   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:08.762416   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:08.762807   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:08.763038   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:08.765127   46143 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0929 11:32:08.766516   46143 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:32:08.766880   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:08.766920   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:08.779923   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45569
	I0929 11:32:08.780286   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:08.780729   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:08.780749   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:08.781061   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:08.781266   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:08.814868   46143 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:32:08.816112   46143 start.go:304] selected driver: kvm2
	I0929 11:32:08.816126   46143 start.go:924] validating driver "kvm2" against &{Name:test-preload-858390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-858390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:08.816246   46143 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:32:08.817216   46143 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:32:08.817300   46143 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:32:08.830669   46143 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:32:08.830695   46143 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:32:08.843216   46143 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:32:08.843603   46143 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:32:08.843637   46143 cni.go:84] Creating CNI manager for ""
	I0929 11:32:08.843678   46143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:32:08.843728   46143 start.go:348] cluster config:
	{Name:test-preload-858390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-858390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:08.843818   46143 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:32:08.846584   46143 out.go:179] * Starting "test-preload-858390" primary control-plane node in "test-preload-858390" cluster
	I0929 11:32:08.847746   46143 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:32:08.871891   46143 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:32:08.871919   46143 cache.go:58] Caching tarball of preloaded images
	I0929 11:32:08.872082   46143 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:32:08.873680   46143 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0929 11:32:08.874775   46143 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:32:08.906225   46143 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:32:11.547969   46143 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:32:11.548076   46143 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:32:12.283432   46143 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
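Editor's note on the download step above: the preload tarball URL carries an md5 digest in its ?checksum=md5:... query parameter, and the file is verified against it before use. The following is a minimal Go sketch of such a verification, assuming the tarball is already on disk; it is an illustration, not minikube's actual download code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest,
// mirroring the kind of checksum check performed on the downloaded preload tarball.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// File name and digest taken from the download URL in the log above.
	tarball := "preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(tarball, "2acdb4dde52794f2167c79dcee7507ae"); err != nil {
		fmt.Println(err)
	}
}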
	I0929 11:32:12.283560   46143 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/config.json ...
	I0929 11:32:12.283839   46143 start.go:360] acquireMachinesLock for test-preload-858390: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:32:12.283904   46143 start.go:364] duration metric: took 41.619µs to acquireMachinesLock for "test-preload-858390"
	I0929 11:32:12.283925   46143 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:32:12.283933   46143 fix.go:54] fixHost starting: 
	I0929 11:32:12.284219   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:12.284261   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:12.297018   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0929 11:32:12.297722   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:12.298258   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:12.298282   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:12.298629   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:12.298878   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:12.299028   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetState
	I0929 11:32:12.301251   46143 fix.go:112] recreateIfNeeded on test-preload-858390: state=Stopped err=<nil>
	I0929 11:32:12.301279   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	W0929 11:32:12.301455   46143 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:32:12.303435   46143 out.go:252] * Restarting existing kvm2 VM for "test-preload-858390" ...
	I0929 11:32:12.303467   46143 main.go:141] libmachine: (test-preload-858390) Calling .Start
	I0929 11:32:12.303659   46143 main.go:141] libmachine: (test-preload-858390) starting domain...
	I0929 11:32:12.303689   46143 main.go:141] libmachine: (test-preload-858390) ensuring networks are active...
	I0929 11:32:12.304460   46143 main.go:141] libmachine: (test-preload-858390) Ensuring network default is active
	I0929 11:32:12.305125   46143 main.go:141] libmachine: (test-preload-858390) Ensuring network mk-test-preload-858390 is active
	I0929 11:32:12.305638   46143 main.go:141] libmachine: (test-preload-858390) getting domain XML...
	I0929 11:32:12.306857   46143 main.go:141] libmachine: (test-preload-858390) DBG | starting domain XML:
	I0929 11:32:12.306885   46143 main.go:141] libmachine: (test-preload-858390) DBG | <domain type='kvm'>
	I0929 11:32:12.306914   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <name>test-preload-858390</name>
	I0929 11:32:12.306936   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <uuid>1b4abc57-a827-452b-a5c8-ccba9a345fc3</uuid>
	I0929 11:32:12.306945   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 11:32:12.306952   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 11:32:12.306958   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:32:12.306963   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <os>
	I0929 11:32:12.306969   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:32:12.306974   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <boot dev='cdrom'/>
	I0929 11:32:12.306980   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <boot dev='hd'/>
	I0929 11:32:12.306984   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <bootmenu enable='no'/>
	I0929 11:32:12.306992   46143 main.go:141] libmachine: (test-preload-858390) DBG |   </os>
	I0929 11:32:12.307002   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <features>
	I0929 11:32:12.307007   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <acpi/>
	I0929 11:32:12.307011   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <apic/>
	I0929 11:32:12.307015   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <pae/>
	I0929 11:32:12.307019   46143 main.go:141] libmachine: (test-preload-858390) DBG |   </features>
	I0929 11:32:12.307052   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:32:12.307077   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <clock offset='utc'/>
	I0929 11:32:12.307090   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:32:12.307106   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:32:12.307116   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <on_crash>destroy</on_crash>
	I0929 11:32:12.307124   46143 main.go:141] libmachine: (test-preload-858390) DBG |   <devices>
	I0929 11:32:12.307136   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:32:12.307147   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <disk type='file' device='cdrom'>
	I0929 11:32:12.307157   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:32:12.307173   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/boot2docker.iso'/>
	I0929 11:32:12.307189   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:32:12.307199   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <readonly/>
	I0929 11:32:12.307212   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:32:12.307222   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </disk>
	I0929 11:32:12.307231   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <disk type='file' device='disk'>
	I0929 11:32:12.307243   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:32:12.307261   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/test-preload-858390.rawdisk'/>
	I0929 11:32:12.307270   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:32:12.307288   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:32:12.307306   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </disk>
	I0929 11:32:12.307317   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:32:12.307328   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:32:12.307334   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </controller>
	I0929 11:32:12.307340   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:32:12.307346   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:32:12.307375   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:32:12.307388   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </controller>
	I0929 11:32:12.307399   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <interface type='network'>
	I0929 11:32:12.307408   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <mac address='52:54:00:d4:b3:1d'/>
	I0929 11:32:12.307420   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <source network='mk-test-preload-858390'/>
	I0929 11:32:12.307429   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <model type='virtio'/>
	I0929 11:32:12.307434   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:32:12.307442   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </interface>
	I0929 11:32:12.307458   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <interface type='network'>
	I0929 11:32:12.307474   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <mac address='52:54:00:30:e6:b2'/>
	I0929 11:32:12.307485   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <source network='default'/>
	I0929 11:32:12.307497   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <model type='virtio'/>
	I0929 11:32:12.307510   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:32:12.307520   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </interface>
	I0929 11:32:12.307531   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <serial type='pty'>
	I0929 11:32:12.307545   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <target type='isa-serial' port='0'>
	I0929 11:32:12.307563   46143 main.go:141] libmachine: (test-preload-858390) DBG |         <model name='isa-serial'/>
	I0929 11:32:12.307575   46143 main.go:141] libmachine: (test-preload-858390) DBG |       </target>
	I0929 11:32:12.307583   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </serial>
	I0929 11:32:12.307602   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <console type='pty'>
	I0929 11:32:12.307616   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <target type='serial' port='0'/>
	I0929 11:32:12.307627   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </console>
	I0929 11:32:12.307640   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:32:12.307649   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:32:12.307655   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <audio id='1' type='none'/>
	I0929 11:32:12.307668   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <memballoon model='virtio'>
	I0929 11:32:12.307682   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:32:12.307692   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </memballoon>
	I0929 11:32:12.307702   46143 main.go:141] libmachine: (test-preload-858390) DBG |     <rng model='virtio'>
	I0929 11:32:12.307727   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:32:12.307788   46143 main.go:141] libmachine: (test-preload-858390) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:32:12.307808   46143 main.go:141] libmachine: (test-preload-858390) DBG |     </rng>
	I0929 11:32:12.307826   46143 main.go:141] libmachine: (test-preload-858390) DBG |   </devices>
	I0929 11:32:12.307838   46143 main.go:141] libmachine: (test-preload-858390) DBG | </domain>
	I0929 11:32:12.307850   46143 main.go:141] libmachine: (test-preload-858390) DBG | 
	I0929 11:32:13.644575   46143 main.go:141] libmachine: (test-preload-858390) waiting for domain to start...
	I0929 11:32:13.646025   46143 main.go:141] libmachine: (test-preload-858390) domain is now running
	I0929 11:32:13.646055   46143 main.go:141] libmachine: (test-preload-858390) waiting for IP...
	I0929 11:32:13.647038   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:13.647616   46143 main.go:141] libmachine: (test-preload-858390) found domain IP: 192.168.39.194
	I0929 11:32:13.647646   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has current primary IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:13.647655   46143 main.go:141] libmachine: (test-preload-858390) reserving static IP address...
	I0929 11:32:13.648098   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "test-preload-858390", mac: "52:54:00:d4:b3:1d", ip: "192.168.39.194"} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:31:08 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:13.648116   46143 main.go:141] libmachine: (test-preload-858390) reserved static IP address 192.168.39.194 for domain test-preload-858390
	I0929 11:32:13.648135   46143 main.go:141] libmachine: (test-preload-858390) DBG | skip adding static IP to network mk-test-preload-858390 - found existing host DHCP lease matching {name: "test-preload-858390", mac: "52:54:00:d4:b3:1d", ip: "192.168.39.194"}
	I0929 11:32:13.648150   46143 main.go:141] libmachine: (test-preload-858390) DBG | Getting to WaitForSSH function...
	I0929 11:32:13.648163   46143 main.go:141] libmachine: (test-preload-858390) waiting for SSH...
	I0929 11:32:13.650554   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:13.650861   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:31:08 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:13.650901   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:13.651018   46143 main.go:141] libmachine: (test-preload-858390) DBG | Using SSH client type: external
	I0929 11:32:13.651042   46143 main.go:141] libmachine: (test-preload-858390) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa (-rw-------)
	I0929 11:32:13.651076   46143 main.go:141] libmachine: (test-preload-858390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:32:13.651094   46143 main.go:141] libmachine: (test-preload-858390) DBG | About to run SSH command:
	I0929 11:32:13.651116   46143 main.go:141] libmachine: (test-preload-858390) DBG | exit 0
	I0929 11:32:24.919011   46143 main.go:141] libmachine: (test-preload-858390) DBG | SSH cmd err, output: exit status 255: 
	I0929 11:32:24.919040   46143 main.go:141] libmachine: (test-preload-858390) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0929 11:32:24.919051   46143 main.go:141] libmachine: (test-preload-858390) DBG | command : exit 0
	I0929 11:32:24.919060   46143 main.go:141] libmachine: (test-preload-858390) DBG | err     : exit status 255
	I0929 11:32:24.919072   46143 main.go:141] libmachine: (test-preload-858390) DBG | output  : 
	I0929 11:32:27.919688   46143 main.go:141] libmachine: (test-preload-858390) DBG | Getting to WaitForSSH function...
	I0929 11:32:27.922708   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:27.923212   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:27.923246   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:27.923398   46143 main.go:141] libmachine: (test-preload-858390) DBG | Using SSH client type: external
	I0929 11:32:27.923420   46143 main.go:141] libmachine: (test-preload-858390) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa (-rw-------)
	I0929 11:32:27.923443   46143 main.go:141] libmachine: (test-preload-858390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:32:27.923452   46143 main.go:141] libmachine: (test-preload-858390) DBG | About to run SSH command:
	I0929 11:32:27.923471   46143 main.go:141] libmachine: (test-preload-858390) DBG | exit 0
	I0929 11:32:28.052600   46143 main.go:141] libmachine: (test-preload-858390) DBG | SSH cmd err, output: <nil>: 
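Editor's note: the two probes above show the usual pattern after a VM restart: the first `exit 0` probe fails with status 255 while sshd is still coming up, and a later retry succeeds. A rough Go sketch of that retry loop follows, reusing the ssh options and key path from the log; the 3-second back-off and attempt count are assumptions, not minikube's exact values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps probing the VM with `ssh ... exit 0` until it succeeds.
// A non-zero exit (e.g. status 255) means the daemon is not accepting
// connections yet, so we back off and retry.
func waitForSSH(addr, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + addr,
		"exit", "0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // assumed back-off between probes
	}
	return fmt.Errorf("ssh to %s did not become ready after %d attempts", addr, attempts)
}

func main() {
	// IP and key path are the ones reported in the log; adjust for a local repro.
	key := "/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa"
	if err := waitForSSH("192.168.39.194", key, 20); err != nil {
		fmt.Println(err)
	}
}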
	I0929 11:32:28.053208   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetConfigRaw
	I0929 11:32:28.053870   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetIP
	I0929 11:32:28.056707   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.057053   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.057079   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.057333   46143 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/config.json ...
	I0929 11:32:28.057557   46143 machine.go:93] provisionDockerMachine start ...
	I0929 11:32:28.057575   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:28.057778   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.060213   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.060564   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.060596   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.060729   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:28.060896   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.061032   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.061168   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:28.061301   46143 main.go:141] libmachine: Using SSH client type: native
	I0929 11:32:28.061588   46143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0929 11:32:28.061601   46143 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:32:28.164409   46143 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0929 11:32:28.164439   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetMachineName
	I0929 11:32:28.164732   46143 buildroot.go:166] provisioning hostname "test-preload-858390"
	I0929 11:32:28.164764   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetMachineName
	I0929 11:32:28.164973   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.168060   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.168504   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.168530   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.168720   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:28.168918   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.169058   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.169204   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:28.169340   46143 main.go:141] libmachine: Using SSH client type: native
	I0929 11:32:28.169553   46143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0929 11:32:28.169566   46143 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-858390 && echo "test-preload-858390" | sudo tee /etc/hostname
	I0929 11:32:28.290972   46143 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-858390
	
	I0929 11:32:28.290997   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.293808   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.294214   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.294246   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.294363   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:28.294588   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.294752   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.294878   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:28.295032   46143 main.go:141] libmachine: Using SSH client type: native
	I0929 11:32:28.295269   46143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0929 11:32:28.295287   46143 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-858390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-858390/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-858390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:32:28.407398   46143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:32:28.407430   46143 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 11:32:28.407457   46143 buildroot.go:174] setting up certificates
	I0929 11:32:28.407469   46143 provision.go:84] configureAuth start
	I0929 11:32:28.407486   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetMachineName
	I0929 11:32:28.407783   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetIP
	I0929 11:32:28.410572   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.410975   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.411016   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.411188   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.413686   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.414056   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.414129   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.414230   46143 provision.go:143] copyHostCerts
	I0929 11:32:28.414277   46143 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem, removing ...
	I0929 11:32:28.414287   46143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem
	I0929 11:32:28.414386   46143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 11:32:28.414509   46143 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem, removing ...
	I0929 11:32:28.414521   46143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem
	I0929 11:32:28.414564   46143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 11:32:28.414647   46143 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem, removing ...
	I0929 11:32:28.414657   46143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem
	I0929 11:32:28.414694   46143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 11:32:28.414773   46143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.test-preload-858390 san=[127.0.0.1 192.168.39.194 localhost minikube test-preload-858390]
	I0929 11:32:28.604215   46143 provision.go:177] copyRemoteCerts
	I0929 11:32:28.604283   46143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:32:28.604305   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.607173   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.607512   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.607546   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.607783   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:28.607982   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.608163   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:28.608291   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:28.694253   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:32:28.725227   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:32:28.755107   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0929 11:32:28.784809   46143 provision.go:87] duration metric: took 377.324564ms to configureAuth
	I0929 11:32:28.784834   46143 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:32:28.785049   46143 config.go:182] Loaded profile config "test-preload-858390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:32:28.785127   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:28.788195   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.788568   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:28.788594   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:28.788767   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:28.788950   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.789098   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:28.789263   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:28.789428   46143 main.go:141] libmachine: Using SSH client type: native
	I0929 11:32:28.789689   46143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0929 11:32:28.789707   46143 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:32:29.037047   46143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:32:29.037073   46143 machine.go:96] duration metric: took 979.502522ms to provisionDockerMachine
	I0929 11:32:29.037088   46143 start.go:293] postStartSetup for "test-preload-858390" (driver="kvm2")
	I0929 11:32:29.037102   46143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:32:29.037122   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:29.037435   46143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:32:29.037472   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:29.040521   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.040934   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:29.040965   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.041093   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:29.041266   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:29.041422   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:29.041561   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:29.124854   46143 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:32:29.130025   46143 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:32:29.130054   46143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 11:32:29.130160   46143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 11:32:29.130273   46143 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem -> 76912.pem in /etc/ssl/certs
	I0929 11:32:29.130410   46143 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:32:29.142628   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:32:29.173392   46143 start.go:296] duration metric: took 136.287095ms for postStartSetup
	I0929 11:32:29.173434   46143 fix.go:56] duration metric: took 16.889500545s for fixHost
	I0929 11:32:29.173459   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:29.176263   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.176612   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:29.176638   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.176822   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:29.177005   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:29.177196   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:29.177331   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:29.177509   46143 main.go:141] libmachine: Using SSH client type: native
	I0929 11:32:29.177692   46143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0929 11:32:29.177701   46143 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:32:29.281414   46143 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145549.236068598
	
	I0929 11:32:29.281440   46143 fix.go:216] guest clock: 1759145549.236068598
	I0929 11:32:29.281449   46143 fix.go:229] Guest: 2025-09-29 11:32:29.236068598 +0000 UTC Remote: 2025-09-29 11:32:29.173439421 +0000 UTC m=+20.473430481 (delta=62.629177ms)
	I0929 11:32:29.281476   46143 fix.go:200] guest clock delta is within tolerance: 62.629177ms
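
fix.go compares the guest's clock against the host's and only intervenes when the difference exceeds a tolerance; here the delta was 62.629177ms. A small sketch of that comparison, where the 1s tolerance is an assumed value for illustration rather than minikube's setting:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance returns the absolute guest/host clock skew and
    // whether it falls inside the allowed tolerance.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(62629177 * time.Nanosecond) // the 62.629177ms delta seen in the log
    	d, ok := clockDeltaWithinTolerance(guest, host, time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }
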
	I0929 11:32:29.281484   46143 start.go:83] releasing machines lock for "test-preload-858390", held for 16.997566492s
	I0929 11:32:29.281511   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:29.281786   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetIP
	I0929 11:32:29.285111   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.285528   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:29.285550   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.285728   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:29.286219   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:29.286424   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:29.286526   46143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:32:29.286567   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:29.286677   46143 ssh_runner.go:195] Run: cat /version.json
	I0929 11:32:29.286700   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:29.289678   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.289817   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.290071   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:29.290102   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.290236   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:29.290268   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:29.290318   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:29.290453   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:29.290532   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:29.290607   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:29.290694   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:29.290698   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:29.290835   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:29.290948   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:29.393167   46143 ssh_runner.go:195] Run: systemctl --version
	I0929 11:32:29.399985   46143 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:32:29.547084   46143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:32:29.554784   46143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:32:29.554877   46143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:32:29.574831   46143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:32:29.574856   46143 start.go:495] detecting cgroup driver to use...
	I0929 11:32:29.574948   46143 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:32:29.594179   46143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:32:29.611688   46143 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:32:29.611760   46143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:32:29.629458   46143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:32:29.646183   46143 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:32:29.789967   46143 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:32:29.997369   46143 docker.go:234] disabling docker service ...
	I0929 11:32:29.997430   46143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:32:30.014021   46143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:32:30.029206   46143 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:32:30.188090   46143 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:32:30.327602   46143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:32:30.345196   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:32:30.371448   46143 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0929 11:32:30.371509   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.383527   46143 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:32:30.383579   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.395947   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.408365   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.421051   46143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:32:30.434455   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.446939   46143 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:32:30.467206   46143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
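
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and default sysctls. A sketch of the first two rewrites done natively in Go instead of via sed; the file path and helper name are illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf replaces the pause_image and cgroup_manager lines in a
    // CRI-O drop-in, mirroring the sed edits shown in the log.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
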
	I0929 11:32:30.479953   46143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:32:30.490523   46143 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:32:30.490582   46143 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:32:30.510557   46143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:32:30.523308   46143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:32:30.664274   46143 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:32:30.772593   46143 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:32:30.772685   46143 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:32:30.778144   46143 start.go:563] Will wait 60s for crictl version
	I0929 11:32:30.778205   46143 ssh_runner.go:195] Run: which crictl
	I0929 11:32:30.782517   46143 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:32:30.828524   46143 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:32:30.828613   46143 ssh_runner.go:195] Run: crio --version
	I0929 11:32:30.857919   46143 ssh_runner.go:195] Run: crio --version
	I0929 11:32:30.890134   46143 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0929 11:32:30.891470   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetIP
	I0929 11:32:30.894206   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:30.894594   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:30.894629   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:30.894894   46143 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:32:30.899531   46143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
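
The bash one-liner above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out before a fresh entry is appended. A sketch of the equivalent logic, assuming direct write access rather than the temp-file-plus-sudo-cp sequence used over SSH:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any line already ending in "<tab>name" and appends
    // "ip<tab>name", so repeated starts do not accumulate duplicate entries.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
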
	I0929 11:32:30.915088   46143 kubeadm.go:875] updating cluster {Name:test-preload-858390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-858390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:32:30.915193   46143 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 11:32:30.915248   46143 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:32:30.957320   46143 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0929 11:32:30.957423   46143 ssh_runner.go:195] Run: which lz4
	I0929 11:32:30.962171   46143 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:32:30.967177   46143 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:32:30.967205   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0929 11:32:32.495386   46143 crio.go:462] duration metric: took 1.533243672s to copy over tarball
	I0929 11:32:32.495453   46143 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:32:34.171710   46143 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.67622802s)
	I0929 11:32:34.171744   46143 crio.go:469] duration metric: took 1.676332911s to extract the tarball
	I0929 11:32:34.171751   46143 ssh_runner.go:146] rm: /preloaded.tar.lz4
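
Because the guest had no /preloaded.tar.lz4, the cached preload tarball was copied over, extracted into /var with lz4, and then removed. A sketch of that extract-then-clean-up sequence, using a hypothetical runner interface in place of minikube's SSH runner:

    package main

    import "fmt"

    // runner stands in for minikube's SSH command runner; it is an assumption
    // made only for this sketch.
    type runner interface {
    	Run(cmd string) error
    }

    type echoRunner struct{}

    func (echoRunner) Run(cmd string) error { fmt.Println(cmd); return nil }

    // extractPreload mirrors the logged sequence: untar the lz4 preload into
    // /var (populating the CRI-O image store), then delete the tarball.
    func extractPreload(r runner, tarball string) error {
    	if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
    		return err
    	}
    	return r.Run("sudo rm -f " + tarball)
    }

    func main() {
    	_ = extractPreload(echoRunner{}, "/preloaded.tar.lz4")
    }
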
	I0929 11:32:34.213207   46143 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:32:34.260116   46143 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:32:34.260146   46143 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:32:34.260155   46143 kubeadm.go:926] updating node { 192.168.39.194 8443 v1.32.0 crio true true} ...
	I0929 11:32:34.260255   46143 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-858390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-858390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:32:34.260319   46143 ssh_runner.go:195] Run: crio config
	I0929 11:32:34.310044   46143 cni.go:84] Creating CNI manager for ""
	I0929 11:32:34.310065   46143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:32:34.310074   46143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:32:34.310092   46143 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-858390 NodeName:test-preload-858390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:32:34.310194   46143 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-858390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.194"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:32:34.310252   46143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0929 11:32:34.322911   46143 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:32:34.322995   46143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:32:34.334983   46143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0929 11:32:34.355631   46143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:32:34.376164   46143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0929 11:32:34.397118   46143 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0929 11:32:34.401376   46143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:32:34.415885   46143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:32:34.555441   46143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:32:34.575583   46143 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390 for IP: 192.168.39.194
	I0929 11:32:34.575608   46143 certs.go:194] generating shared ca certs ...
	I0929 11:32:34.575632   46143 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:32:34.575806   46143 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 11:32:34.575870   46143 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 11:32:34.575884   46143 certs.go:256] generating profile certs ...
	I0929 11:32:34.575983   46143 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.key
	I0929 11:32:34.576090   46143 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/apiserver.key.d87a9da6
	I0929 11:32:34.576157   46143 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/proxy-client.key
	I0929 11:32:34.576315   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691.pem (1338 bytes)
	W0929 11:32:34.576368   46143 certs.go:480] ignoring /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691_empty.pem, impossibly tiny 0 bytes
	I0929 11:32:34.576384   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:32:34.576415   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:32:34.576448   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:32:34.576478   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem (1679 bytes)
	I0929 11:32:34.576535   46143 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:32:34.577416   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:32:34.617582   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:32:34.661383   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:32:34.694450   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 11:32:34.724115   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0929 11:32:34.754267   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 11:32:34.784125   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:32:34.814164   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:32:34.844296   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /usr/share/ca-certificates/76912.pem (1708 bytes)
	I0929 11:32:34.874159   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:32:34.903642   46143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691.pem --> /usr/share/ca-certificates/7691.pem (1338 bytes)
	I0929 11:32:34.933073   46143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:32:34.953763   46143 ssh_runner.go:195] Run: openssl version
	I0929 11:32:34.960312   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76912.pem && ln -fs /usr/share/ca-certificates/76912.pem /etc/ssl/certs/76912.pem"
	I0929 11:32:34.973297   46143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76912.pem
	I0929 11:32:34.978745   46143 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:34 /usr/share/ca-certificates/76912.pem
	I0929 11:32:34.978800   46143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76912.pem
	I0929 11:32:34.986067   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76912.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:32:34.998910   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:32:35.011922   46143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:32:35.017162   46143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:32:35.017222   46143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:32:35.024419   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:32:35.037409   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7691.pem && ln -fs /usr/share/ca-certificates/7691.pem /etc/ssl/certs/7691.pem"
	I0929 11:32:35.050318   46143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7691.pem
	I0929 11:32:35.055609   46143 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:34 /usr/share/ca-certificates/7691.pem
	I0929 11:32:35.055654   46143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7691.pem
	I0929 11:32:35.062896   46143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7691.pem /etc/ssl/certs/51391683.0"
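
Each CA file is linked into /etc/ssl/certs under its OpenSSL subject hash (for example 3ec20f2e.0 and b5213941.0 above) so the system trust store can find it. A sketch that shells out to openssl for the hash and creates the symlink; illustrative only, since minikube issues the equivalent commands over SSH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCertByHash asks openssl for the certificate's subject hash and points
    // /etc/ssl/certs/<hash>.0 at the PEM file, mirroring the ln -fs above.
    func linkCertByHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // emulate ln -fs by replacing any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
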
	I0929 11:32:35.075640   46143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:32:35.080855   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:32:35.088224   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:32:35.095470   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:32:35.102606   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:32:35.109603   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:32:35.116773   46143 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
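
Each "openssl x509 -checkend 86400" run above asserts that the certificate stays valid for at least another 24 hours. The same check expressed with Go's x509 package; the path is one of the certs checked above, used here only as an example:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validForAnother reports whether the certificate at path remains valid for
    // at least duration d, which is what -checkend 86400 verifies for d = 24h.
    func validForAnother(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validForAnother("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
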
	I0929 11:32:35.123778   46143 kubeadm.go:392] StartCluster: {Name:test-preload-858390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-858390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:35.123877   46143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:32:35.123923   46143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:32:35.162962   46143 cri.go:89] found id: ""
	I0929 11:32:35.163031   46143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:32:35.175626   46143 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 11:32:35.175650   46143 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 11:32:35.175698   46143 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 11:32:35.187563   46143 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:32:35.188026   46143 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-858390" does not appear in /home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:32:35.188173   46143 kubeconfig.go:62] /home/jenkins/minikube-integration/21657-3816/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-858390" cluster setting kubeconfig missing "test-preload-858390" context setting]
	I0929 11:32:35.188516   46143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/kubeconfig: {Name:mka4c30ad2429731194076d58cd88072dc744e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:32:35.189130   46143 kapi.go:59] client config for test-preload-858390: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.crt", KeyFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.key", CAFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:32:35.189651   46143 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0929 11:32:35.189669   46143 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0929 11:32:35.189675   46143 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0929 11:32:35.189681   46143 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0929 11:32:35.189687   46143 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0929 11:32:35.190061   46143 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 11:32:35.201116   46143 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.194
	I0929 11:32:35.201148   46143 kubeadm.go:1152] stopping kube-system containers ...
	I0929 11:32:35.201161   46143 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 11:32:35.201204   46143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:32:35.244301   46143 cri.go:89] found id: ""
	I0929 11:32:35.244395   46143 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 11:32:35.262886   46143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:32:35.274911   46143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:32:35.274929   46143 kubeadm.go:157] found existing configuration files:
	
	I0929 11:32:35.274985   46143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:32:35.285734   46143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:32:35.285787   46143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:32:35.296905   46143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:32:35.307484   46143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:32:35.307529   46143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:32:35.318801   46143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:32:35.329483   46143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:32:35.329539   46143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:32:35.341002   46143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:32:35.351517   46143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:32:35.351574   46143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:32:35.363128   46143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:32:35.374548   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:35.438370   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:36.452320   46143 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013909956s)
	I0929 11:32:36.452379   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:36.714595   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:36.807419   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:36.880563   46143 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:32:36.880654   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:37.381178   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:37.880771   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:38.381211   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:38.881265   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:39.380720   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:39.409384   46143 api_server.go:72] duration metric: took 2.52882342s to wait for apiserver process to appear ...
	I0929 11:32:39.409408   46143 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:32:39.409431   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:41.700859   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:32:41.700897   46143 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:32:41.700919   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:41.715231   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 11:32:41.715253   46143 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 11:32:41.909605   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:41.928764   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:32:41.928791   46143 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:32:42.409874   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:42.419874   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 11:32:42.419906   46143 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 11:32:42.910489   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:42.915606   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0929 11:32:42.921976   46143 api_server.go:141] control plane version: v1.32.0
	I0929 11:32:42.921998   46143 api_server.go:131] duration metric: took 3.512583656s to wait for apiserver health ...
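
The healthz wait above polls https://192.168.39.194:8443/healthz until it answers 200, tolerating the initial 403 and 500 responses while the RBAC and priority-class bootstrap hooks complete. A sketch of such a polling loop; TLS verification is skipped only because this sketch carries no cluster CA, unlike the client config shown earlier in the log, which has the CA and client certs:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %v", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.39.194:8443/healthz", time.Minute))
    }
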
	I0929 11:32:42.922007   46143 cni.go:84] Creating CNI manager for ""
	I0929 11:32:42.922012   46143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:32:42.923851   46143 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:32:42.925167   46143 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:32:42.940325   46143 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 11:32:42.966942   46143 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:32:42.973224   46143 system_pods.go:59] 7 kube-system pods found
	I0929 11:32:42.973257   46143 system_pods.go:61] "coredns-668d6bf9bc-47v4r" [ab1e388c-5324-44f9-8394-76dca26d9211] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:32:42.973265   46143 system_pods.go:61] "etcd-test-preload-858390" [9ea8ee11-72bb-4f60-945d-db6cd02c9192] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:32:42.973273   46143 system_pods.go:61] "kube-apiserver-test-preload-858390" [dcf084fc-9c03-4c8e-8c57-ab280ffb7cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:32:42.973278   46143 system_pods.go:61] "kube-controller-manager-test-preload-858390" [df2413ac-dcf6-4ff3-8547-f82f826598e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:32:42.973286   46143 system_pods.go:61] "kube-proxy-nbdv9" [42214060-54ae-4f98-a913-598a8e186dbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 11:32:42.973291   46143 system_pods.go:61] "kube-scheduler-test-preload-858390" [60a4bced-32c4-4cce-ab6a-7864d6987d08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 11:32:42.973295   46143 system_pods.go:61] "storage-provisioner" [ec7e65c0-6df3-4d3a-8383-6b72b81dda94] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:32:42.973301   46143 system_pods.go:74] duration metric: took 6.338862ms to wait for pod list to return data ...
	I0929 11:32:42.973312   46143 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:32:42.976754   46143 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:32:42.976780   46143 node_conditions.go:123] node cpu capacity is 2
	I0929 11:32:42.976794   46143 node_conditions.go:105] duration metric: took 3.477262ms to run NodePressure ...
	I0929 11:32:42.976808   46143 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 11:32:43.266720   46143 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 11:32:43.270235   46143 kubeadm.go:735] kubelet initialised
	I0929 11:32:43.270260   46143 kubeadm.go:736] duration metric: took 3.514666ms waiting for restarted kubelet to initialise ...
	I0929 11:32:43.270277   46143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:32:43.286118   46143 ops.go:34] apiserver oom_adj: -16
	I0929 11:32:43.286143   46143 kubeadm.go:593] duration metric: took 8.110486069s to restartPrimaryControlPlane
	I0929 11:32:43.286152   46143 kubeadm.go:394] duration metric: took 8.16238168s to StartCluster
	I0929 11:32:43.286167   46143 settings.go:142] acquiring lock: {Name:mkbd44ffc9a24198fd299896a4cba1c423a77e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:32:43.286236   46143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:32:43.286875   46143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3816/kubeconfig: {Name:mka4c30ad2429731194076d58cd88072dc744e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:32:43.287097   46143 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:32:43.287190   46143 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 11:32:43.287265   46143 addons.go:69] Setting storage-provisioner=true in profile "test-preload-858390"
	I0929 11:32:43.287278   46143 addons.go:238] Setting addon storage-provisioner=true in "test-preload-858390"
	W0929 11:32:43.287288   46143 addons.go:247] addon storage-provisioner should already be in state true
	I0929 11:32:43.287311   46143 host.go:66] Checking if "test-preload-858390" exists ...
	I0929 11:32:43.287309   46143 addons.go:69] Setting default-storageclass=true in profile "test-preload-858390"
	I0929 11:32:43.287325   46143 config.go:182] Loaded profile config "test-preload-858390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 11:32:43.287341   46143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-858390"
	I0929 11:32:43.287607   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:43.287637   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:43.287664   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:43.287694   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:43.289887   46143 out.go:179] * Verifying Kubernetes components...
	I0929 11:32:43.291431   46143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:32:43.301130   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0929 11:32:43.301601   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:43.302153   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:43.302179   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:43.302183   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0929 11:32:43.302570   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:43.302622   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:43.302821   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetState
	I0929 11:32:43.303021   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:43.303049   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:43.303385   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:43.303920   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:43.303960   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:43.305434   46143 kapi.go:59] client config for test-preload-858390: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.crt", KeyFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.key", CAFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:32:43.305842   46143 addons.go:238] Setting addon default-storageclass=true in "test-preload-858390"
	W0929 11:32:43.305867   46143 addons.go:247] addon default-storageclass should already be in state true
	I0929 11:32:43.305898   46143 host.go:66] Checking if "test-preload-858390" exists ...
	I0929 11:32:43.306277   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:43.306322   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:43.317061   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0929 11:32:43.317617   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:43.318096   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:43.318117   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:43.318446   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:43.318617   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetState
	I0929 11:32:43.320465   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:43.321998   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0929 11:32:43.322383   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:43.322408   46143 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:32:43.322741   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:43.322767   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:43.323173   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:43.323635   46143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:32:43.323677   46143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:32:43.323755   46143 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:32:43.323781   46143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:32:43.323803   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:43.327205   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:43.327720   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:43.327752   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:43.327914   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:43.328099   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:43.328258   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:43.328414   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:43.342138   46143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37691
	I0929 11:32:43.342589   46143 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:32:43.343050   46143 main.go:141] libmachine: Using API Version  1
	I0929 11:32:43.343069   46143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:32:43.343396   46143 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:32:43.343589   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetState
	I0929 11:32:43.345161   46143 main.go:141] libmachine: (test-preload-858390) Calling .DriverName
	I0929 11:32:43.345382   46143 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:32:43.345398   46143 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:32:43.345415   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHHostname
	I0929 11:32:43.348169   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:43.348622   46143 main.go:141] libmachine: (test-preload-858390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:b3:1d", ip: ""} in network mk-test-preload-858390: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:24 +0000 UTC Type:0 Mac:52:54:00:d4:b3:1d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-858390 Clientid:01:52:54:00:d4:b3:1d}
	I0929 11:32:43.348646   46143 main.go:141] libmachine: (test-preload-858390) DBG | domain test-preload-858390 has defined IP address 192.168.39.194 and MAC address 52:54:00:d4:b3:1d in network mk-test-preload-858390
	I0929 11:32:43.348978   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHPort
	I0929 11:32:43.349156   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHKeyPath
	I0929 11:32:43.349289   46143 main.go:141] libmachine: (test-preload-858390) Calling .GetSSHUsername
	I0929 11:32:43.349448   46143 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/test-preload-858390/id_rsa Username:docker}
	I0929 11:32:43.498460   46143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:32:43.518933   46143 node_ready.go:35] waiting up to 6m0s for node "test-preload-858390" to be "Ready" ...
	I0929 11:32:43.643128   46143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:32:43.646301   46143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:32:44.364149   46143 main.go:141] libmachine: Making call to close driver server
	I0929 11:32:44.364183   46143 main.go:141] libmachine: (test-preload-858390) Calling .Close
	I0929 11:32:44.364183   46143 main.go:141] libmachine: Making call to close driver server
	I0929 11:32:44.364200   46143 main.go:141] libmachine: (test-preload-858390) Calling .Close
	I0929 11:32:44.364504   46143 main.go:141] libmachine: (test-preload-858390) DBG | Closing plugin on server side
	I0929 11:32:44.364519   46143 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:32:44.364532   46143 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:32:44.364544   46143 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:32:44.364553   46143 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:32:44.364568   46143 main.go:141] libmachine: Making call to close driver server
	I0929 11:32:44.364575   46143 main.go:141] libmachine: (test-preload-858390) Calling .Close
	I0929 11:32:44.364577   46143 main.go:141] libmachine: Making call to close driver server
	I0929 11:32:44.364630   46143 main.go:141] libmachine: (test-preload-858390) Calling .Close
	I0929 11:32:44.364800   46143 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:32:44.364814   46143 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:32:44.364850   46143 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:32:44.364864   46143 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:32:44.364871   46143 main.go:141] libmachine: (test-preload-858390) DBG | Closing plugin on server side
	I0929 11:32:44.371329   46143 main.go:141] libmachine: Making call to close driver server
	I0929 11:32:44.371400   46143 main.go:141] libmachine: (test-preload-858390) Calling .Close
	I0929 11:32:44.371627   46143 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:32:44.371643   46143 main.go:141] libmachine: (test-preload-858390) DBG | Closing plugin on server side
	I0929 11:32:44.371649   46143 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:32:44.373702   46143 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 11:32:44.375339   46143 addons.go:514] duration metric: took 1.088165295s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0929 11:32:45.522064   46143 node_ready.go:57] node "test-preload-858390" has "Ready":"False" status (will retry)
	W0929 11:32:47.522189   46143 node_ready.go:57] node "test-preload-858390" has "Ready":"False" status (will retry)
	W0929 11:32:49.522261   46143 node_ready.go:57] node "test-preload-858390" has "Ready":"False" status (will retry)
	W0929 11:32:51.523026   46143 node_ready.go:57] node "test-preload-858390" has "Ready":"False" status (will retry)
	I0929 11:32:52.522372   46143 node_ready.go:49] node "test-preload-858390" is "Ready"
	I0929 11:32:52.522398   46143 node_ready.go:38] duration metric: took 9.003435652s for node "test-preload-858390" to be "Ready" ...
	I0929 11:32:52.522412   46143 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:32:52.522471   46143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:32:52.543125   46143 api_server.go:72] duration metric: took 9.255996241s to wait for apiserver process to appear ...
	I0929 11:32:52.543154   46143 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:32:52.543183   46143 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0929 11:32:52.549614   46143 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0929 11:32:52.550650   46143 api_server.go:141] control plane version: v1.32.0
	I0929 11:32:52.550668   46143 api_server.go:131] duration metric: took 7.508133ms to wait for apiserver health ...
	I0929 11:32:52.550675   46143 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:32:52.554593   46143 system_pods.go:59] 7 kube-system pods found
	I0929 11:32:52.554615   46143 system_pods.go:61] "coredns-668d6bf9bc-47v4r" [ab1e388c-5324-44f9-8394-76dca26d9211] Running
	I0929 11:32:52.554620   46143 system_pods.go:61] "etcd-test-preload-858390" [9ea8ee11-72bb-4f60-945d-db6cd02c9192] Running
	I0929 11:32:52.554631   46143 system_pods.go:61] "kube-apiserver-test-preload-858390" [dcf084fc-9c03-4c8e-8c57-ab280ffb7cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:32:52.554636   46143 system_pods.go:61] "kube-controller-manager-test-preload-858390" [df2413ac-dcf6-4ff3-8547-f82f826598e7] Running
	I0929 11:32:52.554643   46143 system_pods.go:61] "kube-proxy-nbdv9" [42214060-54ae-4f98-a913-598a8e186dbd] Running
	I0929 11:32:52.554646   46143 system_pods.go:61] "kube-scheduler-test-preload-858390" [60a4bced-32c4-4cce-ab6a-7864d6987d08] Running
	I0929 11:32:52.554649   46143 system_pods.go:61] "storage-provisioner" [ec7e65c0-6df3-4d3a-8383-6b72b81dda94] Running
	I0929 11:32:52.554654   46143 system_pods.go:74] duration metric: took 3.975223ms to wait for pod list to return data ...
	I0929 11:32:52.554661   46143 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:32:52.557061   46143 default_sa.go:45] found service account: "default"
	I0929 11:32:52.557077   46143 default_sa.go:55] duration metric: took 2.411366ms for default service account to be created ...
	I0929 11:32:52.557083   46143 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:32:52.560321   46143 system_pods.go:86] 7 kube-system pods found
	I0929 11:32:52.560338   46143 system_pods.go:89] "coredns-668d6bf9bc-47v4r" [ab1e388c-5324-44f9-8394-76dca26d9211] Running
	I0929 11:32:52.560343   46143 system_pods.go:89] "etcd-test-preload-858390" [9ea8ee11-72bb-4f60-945d-db6cd02c9192] Running
	I0929 11:32:52.560348   46143 system_pods.go:89] "kube-apiserver-test-preload-858390" [dcf084fc-9c03-4c8e-8c57-ab280ffb7cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:32:52.560362   46143 system_pods.go:89] "kube-controller-manager-test-preload-858390" [df2413ac-dcf6-4ff3-8547-f82f826598e7] Running
	I0929 11:32:52.560367   46143 system_pods.go:89] "kube-proxy-nbdv9" [42214060-54ae-4f98-a913-598a8e186dbd] Running
	I0929 11:32:52.560370   46143 system_pods.go:89] "kube-scheduler-test-preload-858390" [60a4bced-32c4-4cce-ab6a-7864d6987d08] Running
	I0929 11:32:52.560373   46143 system_pods.go:89] "storage-provisioner" [ec7e65c0-6df3-4d3a-8383-6b72b81dda94] Running
	I0929 11:32:52.560378   46143 system_pods.go:126] duration metric: took 3.291109ms to wait for k8s-apps to be running ...
	I0929 11:32:52.560384   46143 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:32:52.560425   46143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:32:52.578381   46143 system_svc.go:56] duration metric: took 17.989358ms WaitForService to wait for kubelet
	I0929 11:32:52.578405   46143 kubeadm.go:578] duration metric: took 9.291283955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:32:52.578436   46143 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:32:52.581895   46143 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:32:52.581914   46143 node_conditions.go:123] node cpu capacity is 2
	I0929 11:32:52.581926   46143 node_conditions.go:105] duration metric: took 3.485831ms to run NodePressure ...
	I0929 11:32:52.581936   46143 start.go:241] waiting for startup goroutines ...
	I0929 11:32:52.581943   46143 start.go:246] waiting for cluster config update ...
	I0929 11:32:52.581952   46143 start.go:255] writing updated cluster config ...
	I0929 11:32:52.582212   46143 ssh_runner.go:195] Run: rm -f paused
	I0929 11:32:52.587633   46143 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:32:52.588070   46143 kapi.go:59] client config for test-preload-858390: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.crt", KeyFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/test-preload-858390/client.key", CAFile:"/home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 11:32:52.591338   46143 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-47v4r" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:52.596010   46143 pod_ready.go:94] pod "coredns-668d6bf9bc-47v4r" is "Ready"
	I0929 11:32:52.596030   46143 pod_ready.go:86] duration metric: took 4.651865ms for pod "coredns-668d6bf9bc-47v4r" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:52.598445   46143 pod_ready.go:83] waiting for pod "etcd-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:52.603547   46143 pod_ready.go:94] pod "etcd-test-preload-858390" is "Ready"
	I0929 11:32:52.603566   46143 pod_ready.go:86] duration metric: took 5.105131ms for pod "etcd-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:52.606613   46143 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:53.612939   46143 pod_ready.go:94] pod "kube-apiserver-test-preload-858390" is "Ready"
	I0929 11:32:53.612970   46143 pod_ready.go:86] duration metric: took 1.006336708s for pod "kube-apiserver-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:53.615097   46143 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:53.792000   46143 pod_ready.go:94] pod "kube-controller-manager-test-preload-858390" is "Ready"
	I0929 11:32:53.792024   46143 pod_ready.go:86] duration metric: took 176.901512ms for pod "kube-controller-manager-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:53.992383   46143 pod_ready.go:83] waiting for pod "kube-proxy-nbdv9" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:54.392709   46143 pod_ready.go:94] pod "kube-proxy-nbdv9" is "Ready"
	I0929 11:32:54.392735   46143 pod_ready.go:86] duration metric: took 400.328358ms for pod "kube-proxy-nbdv9" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:54.592697   46143 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:54.991105   46143 pod_ready.go:94] pod "kube-scheduler-test-preload-858390" is "Ready"
	I0929 11:32:54.991131   46143 pod_ready.go:86] duration metric: took 398.400117ms for pod "kube-scheduler-test-preload-858390" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:32:54.991142   46143 pod_ready.go:40] duration metric: took 2.403485328s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:32:55.032132   46143 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0929 11:32:55.033544   46143 out.go:203] 
	W0929 11:32:55.034936   46143 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0929 11:32:55.036172   46143 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0929 11:32:55.037490   46143 out.go:179] * Done! kubectl is now configured to use "test-preload-858390" cluster and "default" namespace by default
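	(Editor's note on the version-skew warning above: the log itself suggests the bundled kubectl. A minimal sketch of using it against this profile, assuming the profile name test-preload-858390 from the log and that the standard minikube pass-through syntax applies:
	    # use minikube's bundled kubectl (v1.32.0, matching the cluster) instead of /usr/local/bin/kubectl (v1.34.1)
	    minikube -p test-preload-858390 kubectl -- get pods -A
	    # confirm which client version the bundled kubectl reports
	    minikube -p test-preload-858390 kubectl -- version --client
	)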
	
	
	==> CRI-O <==
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.000616533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbc62994-7813-43e1-b03f-02ec3725cbd3 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.002247027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab4e7edf-eaf9-4628-9fc4-01262efb063d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.002742349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145576002720381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab4e7edf-eaf9-4628-9fc4-01262efb063d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.003342335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b14ec002-1f1e-4348-b993-1bcabd3f13ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.003393939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b14ec002-1f1e-4348-b993-1bcabd3f13ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.003604288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edb5cf0b81cc0d502f9a7ca1d01a51c4cc538bd8b47ce2255e6268de4b979632,PodSandboxId:3ddd68a970ebb4044232732c5817e31e8f187526bf1c7dc75a412a36b12930c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145569929828733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-47v4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1e388c-5324-44f9-8394-76dca26d9211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359117fd40cbb93b02b1b1ccd87bb7eee91b2bd104211fc99d939febaa5ce1d8,PodSandboxId:1cf5c044841b5c317aa4231660d220503a087cd139ad18c380f2bd647d19568f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145562263540624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 42214060-54ae-4f98-a913-598a8e186dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf09d8ab3d21e05c21456b65b3a5a22e6f3af77d614fc81c2dc34be6b879201,PodSandboxId:d9018d39fad9be7dd4f95d179586459d96a2cb192569417ed1a4017112bb32fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145562264018455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec
7e65c0-6df3-4d3a-8383-6b72b81dda94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5323ea80b7f43b89002becda32bca4fc5249162df5840ba7fef75bfcb2c4952,PodSandboxId:4513b81cee831ba39814f3b4c479899fc4402b83ebac4cf54de1ae698f5a2a89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145558909857002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcaf9d649
9383447b301ad20e0d13c9f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6880bb433e2a07aaf305f4ce9192daf3d83b88faef0568b83e08bbff473e647d,PodSandboxId:3b69e89358ca4139c779fe81834e2b220eb211c1eed90a32d8daa28999e7b04e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145558866760483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be285ceaf13b051fa04236abff88a1c,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd50349c22b56fd0efccd80d5bc69770fd114dc4771135830817eaf8dfd506,PodSandboxId:245d29f8c0c1508cc4901ea0b495e596d5bc96b607ac3a2ca030efc886810e96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145558841157553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 212ede663014be6f2861fbb40ddba7e1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e362e67e165c8904854a2353473a293041417cb1138c5637058b02adae3f98c,PodSandboxId:1fe130f4f949760da30fba5f02812184249fac4950111229e0b850006a9d5e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145558807555618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b14ec002-1f1e-4348-b993-1bcabd3f13ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.026457338Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7babc46d-647d-4fb8-b7ad-875881bce540 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.027473448Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3ddd68a970ebb4044232732c5817e31e8f187526bf1c7dc75a412a36b12930c7,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-47v4r,Uid:ab1e388c-5324-44f9-8394-76dca26d9211,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145569688771783,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-47v4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1e388c-5324-44f9-8394-76dca26d9211,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T11:32:41.826531626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9018d39fad9be7dd4f95d179586459d96a2cb192569417ed1a4017112bb32fc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ec7e65c0-6df3-4d3a-8383-6b72b81dda94,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145562139475425,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7e65c0-6df3-4d3a-8383-6b72b81dda94,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-29T11:32:41.826529830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1cf5c044841b5c317aa4231660d220503a087cd139ad18c380f2bd647d19568f,Metadata:&PodSandboxMetadata{Name:kube-proxy-nbdv9,Uid:42214060-54ae-4f98-a913-598a8e186dbd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145562138694596,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nbdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42214060-54ae-4f98-a913-598a8e186dbd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-29T11:32:41.826478469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fe130f4f949760da30fba5f02812184249fac4950111229e0b850006a9d5e62,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-858390,Ui
d:e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145558624708017,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,kubernetes.io/config.seen: 2025-09-29T11:32:36.806659735Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4513b81cee831ba39814f3b4c479899fc4402b83ebac4cf54de1ae698f5a2a89,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-858390,Uid:fcaf9d6499383447b301ad20e0d13c9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145558619841026,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-858390,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcaf9d6499383447b301ad20e0d13c9f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.194:8443,kubernetes.io/config.hash: fcaf9d6499383447b301ad20e0d13c9f,kubernetes.io/config.seen: 2025-09-29T11:32:36.806666054Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:245d29f8c0c1508cc4901ea0b495e596d5bc96b607ac3a2ca030efc886810e96,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-858390,Uid:212ede663014be6f2861fbb40ddba7e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145558610191583,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 212ede663014be6f2861fbb40ddba7e1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 212ede663014be6f2861fbb40ddba7e1,kub
ernetes.io/config.seen: 2025-09-29T11:32:36.806663432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b69e89358ca4139c779fe81834e2b220eb211c1eed90a32d8daa28999e7b04e,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-858390,Uid:9be285ceaf13b051fa04236abff88a1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759145558608130102,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be285ceaf13b051fa04236abff88a1c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.194:2379,kubernetes.io/config.hash: 9be285ceaf13b051fa04236abff88a1c,kubernetes.io/config.seen: 2025-09-29T11:32:36.844682443Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7babc46d-647d-4fb8-b7ad-875881bce540 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.029534682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc8f773f-59e8-4bcf-9b45-be15b72bc873 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.029590708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc8f773f-59e8-4bcf-9b45-be15b72bc873 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.029753505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edb5cf0b81cc0d502f9a7ca1d01a51c4cc538bd8b47ce2255e6268de4b979632,PodSandboxId:3ddd68a970ebb4044232732c5817e31e8f187526bf1c7dc75a412a36b12930c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145569929828733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-47v4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1e388c-5324-44f9-8394-76dca26d9211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359117fd40cbb93b02b1b1ccd87bb7eee91b2bd104211fc99d939febaa5ce1d8,PodSandboxId:1cf5c044841b5c317aa4231660d220503a087cd139ad18c380f2bd647d19568f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145562263540624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 42214060-54ae-4f98-a913-598a8e186dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf09d8ab3d21e05c21456b65b3a5a22e6f3af77d614fc81c2dc34be6b879201,PodSandboxId:d9018d39fad9be7dd4f95d179586459d96a2cb192569417ed1a4017112bb32fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145562264018455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec
7e65c0-6df3-4d3a-8383-6b72b81dda94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5323ea80b7f43b89002becda32bca4fc5249162df5840ba7fef75bfcb2c4952,PodSandboxId:4513b81cee831ba39814f3b4c479899fc4402b83ebac4cf54de1ae698f5a2a89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145558909857002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcaf9d649
9383447b301ad20e0d13c9f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6880bb433e2a07aaf305f4ce9192daf3d83b88faef0568b83e08bbff473e647d,PodSandboxId:3b69e89358ca4139c779fe81834e2b220eb211c1eed90a32d8daa28999e7b04e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145558866760483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be285ceaf13b051fa04236abff88a1c,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd50349c22b56fd0efccd80d5bc69770fd114dc4771135830817eaf8dfd506,PodSandboxId:245d29f8c0c1508cc4901ea0b495e596d5bc96b607ac3a2ca030efc886810e96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145558841157553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 212ede663014be6f2861fbb40ddba7e1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e362e67e165c8904854a2353473a293041417cb1138c5637058b02adae3f98c,PodSandboxId:1fe130f4f949760da30fba5f02812184249fac4950111229e0b850006a9d5e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145558807555618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc8f773f-59e8-4bcf-9b45-be15b72bc873 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.049224979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d940b10a-6fe3-482e-8544-bcfef02a8c2b name=/runtime.v1.RuntimeService/Version
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.049293739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d940b10a-6fe3-482e-8544-bcfef02a8c2b name=/runtime.v1.RuntimeService/Version
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.051176560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db485f00-5f69-4f40-99f1-a825a0a9ebd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.052014716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145576051990984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db485f00-5f69-4f40-99f1-a825a0a9ebd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.052591780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b924fa9-ce4f-4b99-8eb4-6e26b0e6b178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.052683249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b924fa9-ce4f-4b99-8eb4-6e26b0e6b178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.052867066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edb5cf0b81cc0d502f9a7ca1d01a51c4cc538bd8b47ce2255e6268de4b979632,PodSandboxId:3ddd68a970ebb4044232732c5817e31e8f187526bf1c7dc75a412a36b12930c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145569929828733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-47v4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1e388c-5324-44f9-8394-76dca26d9211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359117fd40cbb93b02b1b1ccd87bb7eee91b2bd104211fc99d939febaa5ce1d8,PodSandboxId:1cf5c044841b5c317aa4231660d220503a087cd139ad18c380f2bd647d19568f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145562263540624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 42214060-54ae-4f98-a913-598a8e186dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf09d8ab3d21e05c21456b65b3a5a22e6f3af77d614fc81c2dc34be6b879201,PodSandboxId:d9018d39fad9be7dd4f95d179586459d96a2cb192569417ed1a4017112bb32fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145562264018455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec
7e65c0-6df3-4d3a-8383-6b72b81dda94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5323ea80b7f43b89002becda32bca4fc5249162df5840ba7fef75bfcb2c4952,PodSandboxId:4513b81cee831ba39814f3b4c479899fc4402b83ebac4cf54de1ae698f5a2a89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145558909857002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcaf9d649
9383447b301ad20e0d13c9f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6880bb433e2a07aaf305f4ce9192daf3d83b88faef0568b83e08bbff473e647d,PodSandboxId:3b69e89358ca4139c779fe81834e2b220eb211c1eed90a32d8daa28999e7b04e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145558866760483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be285ceaf13b051fa04236abff88a1c,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd50349c22b56fd0efccd80d5bc69770fd114dc4771135830817eaf8dfd506,PodSandboxId:245d29f8c0c1508cc4901ea0b495e596d5bc96b607ac3a2ca030efc886810e96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145558841157553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 212ede663014be6f2861fbb40ddba7e1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e362e67e165c8904854a2353473a293041417cb1138c5637058b02adae3f98c,PodSandboxId:1fe130f4f949760da30fba5f02812184249fac4950111229e0b850006a9d5e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145558807555618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b924fa9-ce4f-4b99-8eb4-6e26b0e6b178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.091879633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8af67d0b-3f65-40d0-8b4a-c16926afc29c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.091953466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8af67d0b-3f65-40d0-8b4a-c16926afc29c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.093769231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a9a5967-fa7b-4894-9675-904aa53690a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.094226269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145576094176184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a9a5967-fa7b-4894-9675-904aa53690a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.094806103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ab3c01d-2c1f-4633-be47-cf06b4aae860 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.094873746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ab3c01d-2c1f-4633-be47-cf06b4aae860 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:32:56 test-preload-858390 crio[831]: time="2025-09-29 11:32:56.095028617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edb5cf0b81cc0d502f9a7ca1d01a51c4cc538bd8b47ce2255e6268de4b979632,PodSandboxId:3ddd68a970ebb4044232732c5817e31e8f187526bf1c7dc75a412a36b12930c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759145569929828733,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-47v4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1e388c-5324-44f9-8394-76dca26d9211,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359117fd40cbb93b02b1b1ccd87bb7eee91b2bd104211fc99d939febaa5ce1d8,PodSandboxId:1cf5c044841b5c317aa4231660d220503a087cd139ad18c380f2bd647d19568f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759145562263540624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 42214060-54ae-4f98-a913-598a8e186dbd,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcf09d8ab3d21e05c21456b65b3a5a22e6f3af77d614fc81c2dc34be6b879201,PodSandboxId:d9018d39fad9be7dd4f95d179586459d96a2cb192569417ed1a4017112bb32fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759145562264018455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec
7e65c0-6df3-4d3a-8383-6b72b81dda94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5323ea80b7f43b89002becda32bca4fc5249162df5840ba7fef75bfcb2c4952,PodSandboxId:4513b81cee831ba39814f3b4c479899fc4402b83ebac4cf54de1ae698f5a2a89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759145558909857002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcaf9d649
9383447b301ad20e0d13c9f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6880bb433e2a07aaf305f4ce9192daf3d83b88faef0568b83e08bbff473e647d,PodSandboxId:3b69e89358ca4139c779fe81834e2b220eb211c1eed90a32d8daa28999e7b04e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759145558866760483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be285ceaf13b051fa04236abff88a1c,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd50349c22b56fd0efccd80d5bc69770fd114dc4771135830817eaf8dfd506,PodSandboxId:245d29f8c0c1508cc4901ea0b495e596d5bc96b607ac3a2ca030efc886810e96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759145558841157553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 212ede663014be6f2861fbb40ddba7e1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e362e67e165c8904854a2353473a293041417cb1138c5637058b02adae3f98c,PodSandboxId:1fe130f4f949760da30fba5f02812184249fac4950111229e0b850006a9d5e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759145558807555618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-858390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c26a0c79e4d0e1a2fbaedf0bbbd0b7,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ab3c01d-2c1f-4633-be47-cf06b4aae860 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edb5cf0b81cc0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   1                   3ddd68a970ebb       coredns-668d6bf9bc-47v4r
	fcf09d8ab3d21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   d9018d39fad9b       storage-provisioner
	359117fd40cbb       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   13 seconds ago      Running             kube-proxy                1                   1cf5c044841b5       kube-proxy-nbdv9
	e5323ea80b7f4       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   17 seconds ago      Running             kube-apiserver            1                   4513b81cee831       kube-apiserver-test-preload-858390
	6880bb433e2a0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 seconds ago      Running             etcd                      1                   3b69e89358ca4       etcd-test-preload-858390
	21cd50349c22b       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   17 seconds ago      Running             kube-scheduler            1                   245d29f8c0c15       kube-scheduler-test-preload-858390
	1e362e67e165c       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   17 seconds ago      Running             kube-controller-manager   1                   1fe130f4f9497       kube-controller-manager-test-preload-858390
	
	
	==> coredns [edb5cf0b81cc0d502f9a7ca1d01a51c4cc538bd8b47ce2255e6268de4b979632] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41902 - 57675 "HINFO IN 7583192488635635838.8143711698263831053. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056375896s
	
	
	==> describe nodes <==
	Name:               test-preload-858390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-858390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=test-preload-858390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_31_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:31:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-858390
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:32:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:32:52 +0000   Mon, 29 Sep 2025 11:31:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:32:52 +0000   Mon, 29 Sep 2025 11:31:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:32:52 +0000   Mon, 29 Sep 2025 11:31:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:32:52 +0000   Mon, 29 Sep 2025 11:32:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    test-preload-858390
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b4abc57a827452ba5c8ccba9a345fc3
	  System UUID:                1b4abc57-a827-452b-a5c8-ccba9a345fc3
	  Boot ID:                    0c608d52-a234-4716-aac1-d49f9f7d28b6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-47v4r                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     67s
	  kube-system                 etcd-test-preload-858390                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         72s
	  kube-system                 kube-apiserver-test-preload-858390             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-test-preload-858390    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-nbdv9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-test-preload-858390             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 66s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node test-preload-858390 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node test-preload-858390 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node test-preload-858390 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     72s                kubelet          Node test-preload-858390 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  72s                kubelet          Node test-preload-858390 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s                kubelet          Node test-preload-858390 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Normal   NodeReady                71s                kubelet          Node test-preload-858390 status is now: NodeReady
	  Normal   RegisteredNode           68s                node-controller  Node test-preload-858390 event: Registered Node test-preload-858390 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-858390 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-858390 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-858390 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-858390 has been rebooted, boot id: 0c608d52-a234-4716-aac1-d49f9f7d28b6
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-858390 event: Registered Node test-preload-858390 in Controller
	
	
	==> dmesg <==
	[Sep29 11:32] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002206] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.015471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082648] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.094224] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.544016] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.023271] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [6880bb433e2a07aaf305f4ce9192daf3d83b88faef0568b83e08bbff473e647d] <==
	{"level":"info","ts":"2025-09-29T11:32:39.268306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2025-09-29T11:32:39.275785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2025-09-29T11:32:39.275911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T11:32:39.276555Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T11:32:39.284540Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T11:32:39.284864Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T11:32:39.284912Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T11:32:39.284972Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2025-09-29T11:32:39.284988Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2025-09-29T11:32:40.531033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T11:32:40.531069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T11:32:40.531086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2025-09-29T11:32:40.531113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T11:32:40.531119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2025-09-29T11:32:40.531130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T11:32:40.531145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2025-09-29T11:32:40.532865Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:test-preload-858390 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T11:32:40.532879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T11:32:40.533143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T11:32:40.533185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T11:32:40.532897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T11:32:40.533831Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T11:32:40.533969Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T11:32:40.534435Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T11:32:40.534643Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	
	
	==> kernel <==
	 11:32:56 up 0 min,  0 users,  load average: 0.70, 0.19, 0.06
	Linux test-preload-858390 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e5323ea80b7f43b89002becda32bca4fc5249162df5840ba7fef75bfcb2c4952] <==
	I0929 11:32:41.733318       1 policy_source.go:240] refreshing policies
	I0929 11:32:41.754556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0929 11:32:41.754628       1 aggregator.go:171] initial CRD sync complete...
	I0929 11:32:41.754635       1 autoregister_controller.go:144] Starting autoregister controller
	I0929 11:32:41.754641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 11:32:41.754645       1 cache.go:39] Caches are synced for autoregister controller
	I0929 11:32:41.760139       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0929 11:32:41.763135       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0929 11:32:41.788867       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0929 11:32:41.806018       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 11:32:41.821426       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0929 11:32:41.821472       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 11:32:41.828860       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0929 11:32:41.828908       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 11:32:41.829122       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0929 11:32:41.862525       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 11:32:41.990867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0929 11:32:42.618977       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:32:43.069671       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0929 11:32:43.111046       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0929 11:32:43.148812       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:32:43.155555       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:32:44.947080       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:32:45.288474       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0929 11:32:45.394085       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1e362e67e165c8904854a2353473a293041417cb1138c5637058b02adae3f98c] <==
	I0929 11:32:44.945753       1 shared_informer.go:320] Caches are synced for disruption
	I0929 11:32:44.945954       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-858390"
	I0929 11:32:44.949268       1 shared_informer.go:320] Caches are synced for TTL
	I0929 11:32:44.957739       1 shared_informer.go:320] Caches are synced for attach detach
	I0929 11:32:44.958821       1 shared_informer.go:320] Caches are synced for garbage collector
	I0929 11:32:44.960600       1 shared_informer.go:320] Caches are synced for PVC protection
	I0929 11:32:44.963471       1 shared_informer.go:320] Caches are synced for endpoint
	I0929 11:32:44.968956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0929 11:32:44.970151       1 shared_informer.go:320] Caches are synced for job
	I0929 11:32:44.972549       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0929 11:32:44.982720       1 shared_informer.go:320] Caches are synced for daemon sets
	I0929 11:32:44.983181       1 shared_informer.go:320] Caches are synced for crt configmap
	I0929 11:32:44.983434       1 shared_informer.go:320] Caches are synced for GC
	I0929 11:32:44.983667       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0929 11:32:44.985970       1 shared_informer.go:320] Caches are synced for stateful set
	I0929 11:32:44.986073       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0929 11:32:44.988636       1 shared_informer.go:320] Caches are synced for resource quota
	I0929 11:32:45.296393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="361.905836ms"
	I0929 11:32:45.296474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.033µs"
	I0929 11:32:50.027773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.883µs"
	I0929 11:32:51.041869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.777602ms"
	I0929 11:32:51.042732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="128.274µs"
	I0929 11:32:52.087847       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-858390"
	I0929 11:32:52.100996       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-858390"
	I0929 11:32:54.918060       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [359117fd40cbb93b02b1b1ccd87bb7eee91b2bd104211fc99d939febaa5ce1d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0929 11:32:42.474081       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0929 11:32:42.484675       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	E0929 11:32:42.484769       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:32:42.522290       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0929 11:32:42.522321       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:32:42.522341       1 server_linux.go:170] "Using iptables Proxier"
	I0929 11:32:42.525656       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:32:42.526064       1 server.go:497] "Version info" version="v1.32.0"
	I0929 11:32:42.526097       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:32:42.527769       1 config.go:199] "Starting service config controller"
	I0929 11:32:42.527818       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0929 11:32:42.527844       1 config.go:105] "Starting endpoint slice config controller"
	I0929 11:32:42.527867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0929 11:32:42.528353       1 config.go:329] "Starting node config controller"
	I0929 11:32:42.528382       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0929 11:32:42.628126       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0929 11:32:42.628174       1 shared_informer.go:320] Caches are synced for service config
	I0929 11:32:42.628806       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [21cd50349c22b56fd0efccd80d5bc69770fd114dc4771135830817eaf8dfd506] <==
	I0929 11:32:39.997033       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:32:41.669874       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:32:41.669979       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:32:41.670008       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:32:41.670019       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:32:41.742217       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0929 11:32:41.742263       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:32:41.748898       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:32:41.748980       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 11:32:41.749576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0929 11:32:41.749706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:32:41.849968       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.900141    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-858390\" already exists" pod="kube-system/etcd-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.920043    1153 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.958756    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.959474    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.959949    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.960572    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.973598    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-858390\" already exists" pod="kube-system/kube-scheduler-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.979101    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-858390\" already exists" pod="kube-system/kube-controller-manager-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.979370    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-858390\" already exists" pod="kube-system/etcd-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.979564    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-858390\" already exists" pod="kube-system/kube-apiserver-test-preload-858390"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.986036    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec7e65c0-6df3-4d3a-8383-6b72b81dda94-tmp\") pod \"storage-provisioner\" (UID: \"ec7e65c0-6df3-4d3a-8383-6b72b81dda94\") " pod="kube-system/storage-provisioner"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.986098    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42214060-54ae-4f98-a913-598a8e186dbd-xtables-lock\") pod \"kube-proxy-nbdv9\" (UID: \"42214060-54ae-4f98-a913-598a8e186dbd\") " pod="kube-system/kube-proxy-nbdv9"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: I0929 11:32:41.986116    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42214060-54ae-4f98-a913-598a8e186dbd-lib-modules\") pod \"kube-proxy-nbdv9\" (UID: \"42214060-54ae-4f98-a913-598a8e186dbd\") " pod="kube-system/kube-proxy-nbdv9"
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.986876    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:32:41 test-preload-858390 kubelet[1153]: E0929 11:32:41.986942    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume podName:ab1e388c-5324-44f9-8394-76dca26d9211 nodeName:}" failed. No retries permitted until 2025-09-29 11:32:42.486923248 +0000 UTC m=+5.816719028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume") pod "coredns-668d6bf9bc-47v4r" (UID: "ab1e388c-5324-44f9-8394-76dca26d9211") : object "kube-system"/"coredns" not registered
	Sep 29 11:32:42 test-preload-858390 kubelet[1153]: E0929 11:32:42.489838    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:32:42 test-preload-858390 kubelet[1153]: E0929 11:32:42.489902    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume podName:ab1e388c-5324-44f9-8394-76dca26d9211 nodeName:}" failed. No retries permitted until 2025-09-29 11:32:43.489889435 +0000 UTC m=+6.819685227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume") pod "coredns-668d6bf9bc-47v4r" (UID: "ab1e388c-5324-44f9-8394-76dca26d9211") : object "kube-system"/"coredns" not registered
	Sep 29 11:32:43 test-preload-858390 kubelet[1153]: E0929 11:32:43.497931    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:32:43 test-preload-858390 kubelet[1153]: E0929 11:32:43.497995    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume podName:ab1e388c-5324-44f9-8394-76dca26d9211 nodeName:}" failed. No retries permitted until 2025-09-29 11:32:45.497982625 +0000 UTC m=+8.827778406 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume") pod "coredns-668d6bf9bc-47v4r" (UID: "ab1e388c-5324-44f9-8394-76dca26d9211") : object "kube-system"/"coredns" not registered
	Sep 29 11:32:43 test-preload-858390 kubelet[1153]: E0929 11:32:43.878364    1153 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-47v4r" podUID="ab1e388c-5324-44f9-8394-76dca26d9211"
	Sep 29 11:32:45 test-preload-858390 kubelet[1153]: E0929 11:32:45.513393    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 11:32:45 test-preload-858390 kubelet[1153]: E0929 11:32:45.513538    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume podName:ab1e388c-5324-44f9-8394-76dca26d9211 nodeName:}" failed. No retries permitted until 2025-09-29 11:32:49.513474454 +0000 UTC m=+12.843270246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ab1e388c-5324-44f9-8394-76dca26d9211-config-volume") pod "coredns-668d6bf9bc-47v4r" (UID: "ab1e388c-5324-44f9-8394-76dca26d9211") : object "kube-system"/"coredns" not registered
	Sep 29 11:32:45 test-preload-858390 kubelet[1153]: E0929 11:32:45.878226    1153 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-47v4r" podUID="ab1e388c-5324-44f9-8394-76dca26d9211"
	Sep 29 11:32:46 test-preload-858390 kubelet[1153]: E0929 11:32:46.874863    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145566874085703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 11:32:46 test-preload-858390 kubelet[1153]: E0929 11:32:46.874905    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145566874085703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [fcf09d8ab3d21e05c21456b65b3a5a22e6f3af77d614fc81c2dc34be6b879201] <==
	I0929 11:32:42.377391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-858390 -n test-preload-858390
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-858390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-858390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-858390
--- FAIL: TestPreload (125.86s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (75.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-869600 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-869600 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.717979473s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-869600] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-869600" primary control-plane node in "pause-869600" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-869600" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:36:40.008097   49203 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:36:40.008487   49203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:36:40.008504   49203 out.go:374] Setting ErrFile to fd 2...
	I0929 11:36:40.008512   49203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:36:40.008810   49203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:36:40.009237   49203 out.go:368] Setting JSON to false
	I0929 11:36:40.010219   49203 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4745,"bootTime":1759141055,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:36:40.010303   49203 start.go:140] virtualization: kvm guest
	I0929 11:36:40.012234   49203 out.go:179] * [pause-869600] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:36:40.013393   49203 notify.go:220] Checking for updates...
	I0929 11:36:40.013435   49203 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:36:40.014643   49203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:36:40.015855   49203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:36:40.017034   49203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:36:40.018286   49203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:36:40.019679   49203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:36:40.021512   49203 config.go:182] Loaded profile config "pause-869600": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:36:40.022166   49203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:40.022242   49203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:40.040245   49203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0929 11:36:40.040835   49203 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:40.041505   49203 main.go:141] libmachine: Using API Version  1
	I0929 11:36:40.041534   49203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:40.041937   49203 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:40.042186   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:36:40.042486   49203 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:36:40.042851   49203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:40.042907   49203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:40.056578   49203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0929 11:36:40.056972   49203 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:40.057423   49203 main.go:141] libmachine: Using API Version  1
	I0929 11:36:40.057446   49203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:40.057803   49203 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:40.058000   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:36:40.162408   49203 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:36:40.163694   49203 start.go:304] selected driver: kvm2
	I0929 11:36:40.163713   49203 start.go:924] validating driver "kvm2" against &{Name:pause-869600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterN
ame:pause-869600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:36:40.163878   49203 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:36:40.164306   49203 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:36:40.164418   49203 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:36:40.179897   49203 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:36:40.179938   49203 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:36:40.197930   49203 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:36:40.199128   49203 cni.go:84] Creating CNI manager for ""
	I0929 11:36:40.199226   49203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:36:40.199310   49203 start.go:348] cluster config:
	{Name:pause-869600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-869600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:36:40.199505   49203 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:36:40.203707   49203 out.go:179] * Starting "pause-869600" primary control-plane node in "pause-869600" cluster
	I0929 11:36:40.205009   49203 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:36:40.205064   49203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:36:40.205078   49203 cache.go:58] Caching tarball of preloaded images
	I0929 11:36:40.205178   49203 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:36:40.205192   49203 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:36:40.205365   49203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/config.json ...
	I0929 11:36:40.205648   49203 start.go:360] acquireMachinesLock for pause-869600: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:36:56.674559   49203 start.go:364] duration metric: took 16.468876909s to acquireMachinesLock for "pause-869600"
	I0929 11:36:56.674607   49203 start.go:96] Skipping create...Using existing machine configuration
	I0929 11:36:56.674618   49203 fix.go:54] fixHost starting: 
	I0929 11:36:56.675092   49203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:56.675139   49203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:56.692574   49203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I0929 11:36:56.693049   49203 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:56.693544   49203 main.go:141] libmachine: Using API Version  1
	I0929 11:36:56.693569   49203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:56.694022   49203 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:56.694232   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:36:56.694416   49203 main.go:141] libmachine: (pause-869600) Calling .GetState
	I0929 11:36:56.696440   49203 fix.go:112] recreateIfNeeded on pause-869600: state=Running err=<nil>
	W0929 11:36:56.696465   49203 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 11:36:56.698268   49203 out.go:252] * Updating the running kvm2 "pause-869600" VM ...
	I0929 11:36:56.698295   49203 machine.go:93] provisionDockerMachine start ...
	I0929 11:36:56.698312   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:36:56.698531   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:56.701757   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.702273   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:56.702299   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.702542   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:36:56.702756   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:56.702929   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:56.703091   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:36:56.703318   49203 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:56.703709   49203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0929 11:36:56.703739   49203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:36:56.835385   49203 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-869600
	
	I0929 11:36:56.835420   49203 main.go:141] libmachine: (pause-869600) Calling .GetMachineName
	I0929 11:36:56.835703   49203 buildroot.go:166] provisioning hostname "pause-869600"
	I0929 11:36:56.835733   49203 main.go:141] libmachine: (pause-869600) Calling .GetMachineName
	I0929 11:36:56.835956   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:56.840960   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.841530   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:56.841580   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.841936   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:36:56.842208   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:56.842402   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:56.842565   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:36:56.842785   49203 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:56.843109   49203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0929 11:36:56.843132   49203 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-869600 && echo "pause-869600" | sudo tee /etc/hostname
	I0929 11:36:56.994959   49203 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-869600
	
	I0929 11:36:56.994991   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:56.999125   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.999606   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:56.999638   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:56.999928   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:36:57.000133   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:57.000375   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:57.000527   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:36:57.000698   49203 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:57.000980   49203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0929 11:36:57.001016   49203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-869600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-869600/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-869600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:36:57.130946   49203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:36:57.130991   49203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 11:36:57.131115   49203 buildroot.go:174] setting up certificates
	I0929 11:36:57.131132   49203 provision.go:84] configureAuth start
	I0929 11:36:57.131157   49203 main.go:141] libmachine: (pause-869600) Calling .GetMachineName
	I0929 11:36:57.131548   49203 main.go:141] libmachine: (pause-869600) Calling .GetIP
	I0929 11:36:57.134987   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.135534   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:57.135564   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.135799   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:57.138836   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.139328   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:57.139368   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.139596   49203 provision.go:143] copyHostCerts
	I0929 11:36:57.139688   49203 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem, removing ...
	I0929 11:36:57.139707   49203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem
	I0929 11:36:57.139806   49203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 11:36:57.140026   49203 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem, removing ...
	I0929 11:36:57.140045   49203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem
	I0929 11:36:57.140088   49203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 11:36:57.140197   49203 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem, removing ...
	I0929 11:36:57.140206   49203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem
	I0929 11:36:57.140240   49203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 11:36:57.140320   49203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.pause-869600 san=[127.0.0.1 192.168.50.21 localhost minikube pause-869600]
	I0929 11:36:57.206818   49203 provision.go:177] copyRemoteCerts
	I0929 11:36:57.206881   49203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:36:57.206905   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:57.210243   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.210802   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:57.210831   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.211150   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:36:57.211427   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:57.211620   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:36:57.211867   49203 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/pause-869600/id_rsa Username:docker}
	I0929 11:36:57.316634   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:36:57.366278   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:36:57.426487   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:36:57.470702   49203 provision.go:87] duration metric: took 339.554958ms to configureAuth
	I0929 11:36:57.470739   49203 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:36:57.471080   49203 config.go:182] Loaded profile config "pause-869600": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:36:57.471161   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:36:57.474785   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.475283   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:36:57.475335   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:36:57.475504   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:36:57.475711   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:57.475897   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:36:57.476058   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:36:57.476224   49203 main.go:141] libmachine: Using SSH client type: native
	I0929 11:36:57.476548   49203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0929 11:36:57.476575   49203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:37:05.164674   49203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:37:05.164724   49203 machine.go:96] duration metric: took 8.466397148s to provisionDockerMachine
	I0929 11:37:05.164739   49203 start.go:293] postStartSetup for "pause-869600" (driver="kvm2")
	I0929 11:37:05.164753   49203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:37:05.164780   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:37:05.165105   49203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:37:05.165155   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:37:05.168546   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.169061   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:05.169090   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.169272   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:37:05.169468   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:37:05.169644   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:37:05.169808   49203 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/pause-869600/id_rsa Username:docker}
	I0929 11:37:05.262430   49203 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:37:05.267473   49203 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:37:05.267494   49203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 11:37:05.267544   49203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 11:37:05.267620   49203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem -> 76912.pem in /etc/ssl/certs
	I0929 11:37:05.267711   49203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:37:05.283592   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:37:05.320413   49203 start.go:296] duration metric: took 155.659537ms for postStartSetup
	I0929 11:37:05.320450   49203 fix.go:56] duration metric: took 8.645832967s for fixHost
	I0929 11:37:05.320474   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:37:05.323414   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.323802   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:05.323829   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.324023   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:37:05.324231   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:37:05.324397   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:37:05.324527   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:37:05.324735   49203 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:05.325009   49203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0929 11:37:05.325023   49203 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:37:05.461313   49203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145825.454300288
	
	I0929 11:37:05.461335   49203 fix.go:216] guest clock: 1759145825.454300288
	I0929 11:37:05.461345   49203 fix.go:229] Guest: 2025-09-29 11:37:05.454300288 +0000 UTC Remote: 2025-09-29 11:37:05.32045451 +0000 UTC m=+25.352272032 (delta=133.845778ms)
	I0929 11:37:05.461402   49203 fix.go:200] guest clock delta is within tolerance: 133.845778ms
	I0929 11:37:05.461412   49203 start.go:83] releasing machines lock for "pause-869600", held for 8.786825888s
	I0929 11:37:05.461443   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:37:05.461687   49203 main.go:141] libmachine: (pause-869600) Calling .GetIP
	I0929 11:37:05.465267   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.465788   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:05.465820   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.466070   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:37:05.466691   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:37:05.466871   49203 main.go:141] libmachine: (pause-869600) Calling .DriverName
	I0929 11:37:05.466973   49203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:37:05.467024   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:37:05.467100   49203 ssh_runner.go:195] Run: cat /version.json
	I0929 11:37:05.467163   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHHostname
	I0929 11:37:05.470672   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.470876   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.471265   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:05.471319   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.471370   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:05.471386   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:05.471556   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:37:05.471741   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:37:05.471836   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHPort
	I0929 11:37:05.471878   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:37:05.472021   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHKeyPath
	I0929 11:37:05.472206   49203 main.go:141] libmachine: (pause-869600) Calling .GetSSHUsername
	I0929 11:37:05.472227   49203 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/pause-869600/id_rsa Username:docker}
	I0929 11:37:05.472408   49203 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/pause-869600/id_rsa Username:docker}
	I0929 11:37:05.665207   49203 ssh_runner.go:195] Run: systemctl --version
	I0929 11:37:05.678094   49203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:37:05.941652   49203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:37:05.954523   49203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:37:05.954597   49203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:37:05.974182   49203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 11:37:05.974208   49203 start.go:495] detecting cgroup driver to use...
	I0929 11:37:05.974283   49203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:37:06.014898   49203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:37:06.044699   49203 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:37:06.044773   49203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:37:06.099282   49203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:37:06.154209   49203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:37:06.565049   49203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:37:07.091726   49203 docker.go:234] disabling docker service ...
	I0929 11:37:07.091803   49203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:37:07.132976   49203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:37:07.170334   49203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:37:07.589189   49203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:37:07.914936   49203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:37:07.936973   49203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:37:07.974569   49203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:37:07.974660   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:07.996981   49203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:37:07.997051   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.019055   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.038448   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.061711   49203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:37:08.087104   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.110159   49203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.132305   49203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:08.155182   49203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:37:08.172826   49203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:37:08.195235   49203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:37:08.430756   49203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:37:18.349667   49203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.918868705s)
	I0929 11:37:18.349697   49203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:37:18.349771   49203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:37:18.355848   49203 start.go:563] Will wait 60s for crictl version
	I0929 11:37:18.355928   49203 ssh_runner.go:195] Run: which crictl
	I0929 11:37:18.360508   49203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:37:18.401697   49203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:37:18.401793   49203 ssh_runner.go:195] Run: crio --version
	I0929 11:37:18.431928   49203 ssh_runner.go:195] Run: crio --version
	I0929 11:37:18.467005   49203 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 11:37:18.468025   49203 main.go:141] libmachine: (pause-869600) Calling .GetIP
	I0929 11:37:18.471919   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:18.472392   49203 main.go:141] libmachine: (pause-869600) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:97:f1", ip: ""} in network mk-pause-869600: {Iface:virbr2 ExpiryTime:2025-09-29 12:35:30 +0000 UTC Type:0 Mac:52:54:00:db:97:f1 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-869600 Clientid:01:52:54:00:db:97:f1}
	I0929 11:37:18.472419   49203 main.go:141] libmachine: (pause-869600) DBG | domain pause-869600 has defined IP address 192.168.50.21 and MAC address 52:54:00:db:97:f1 in network mk-pause-869600
	I0929 11:37:18.472683   49203 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0929 11:37:18.478171   49203 kubeadm.go:875] updating cluster {Name:pause-869600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-869600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:37:18.478300   49203 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:37:18.478374   49203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:37:18.541026   49203 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:37:18.541054   49203 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:37:18.541105   49203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:37:18.583200   49203 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:37:18.583228   49203 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:37:18.583238   49203 kubeadm.go:926] updating node { 192.168.50.21 8443 v1.34.0 crio true true} ...
	I0929 11:37:18.583387   49203 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-869600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-869600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:37:18.583475   49203 ssh_runner.go:195] Run: crio config
	I0929 11:37:18.642576   49203 cni.go:84] Creating CNI manager for ""
	I0929 11:37:18.642607   49203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:37:18.642624   49203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:37:18.642652   49203 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-869600 NodeName:pause-869600 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:37:18.642821   49203 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-869600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.21"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:37:18.642903   49203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:37:18.660523   49203 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:37:18.660594   49203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:37:18.673292   49203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0929 11:37:18.695453   49203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:37:18.718010   49203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0929 11:37:18.740281   49203 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0929 11:37:18.745149   49203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:37:18.929444   49203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:37:19.016198   49203 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600 for IP: 192.168.50.21
	I0929 11:37:19.016227   49203 certs.go:194] generating shared ca certs ...
	I0929 11:37:19.016249   49203 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:37:19.016472   49203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 11:37:19.016543   49203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 11:37:19.016573   49203 certs.go:256] generating profile certs ...
	I0929 11:37:19.016772   49203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/client.key
	I0929 11:37:19.016873   49203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/apiserver.key.7af55132
	I0929 11:37:19.016957   49203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/proxy-client.key
	I0929 11:37:19.017116   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691.pem (1338 bytes)
	W0929 11:37:19.017187   49203 certs.go:480] ignoring /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691_empty.pem, impossibly tiny 0 bytes
	I0929 11:37:19.017202   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:37:19.017237   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem (1082 bytes)
	I0929 11:37:19.017290   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:37:19.017401   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem (1679 bytes)
	I0929 11:37:19.017482   49203 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:37:19.018384   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:37:19.093391   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:37:19.180986   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:37:19.248803   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 11:37:19.349010   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:37:19.431365   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 11:37:19.498780   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:37:19.577122   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/pause-869600/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:37:19.629196   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /usr/share/ca-certificates/76912.pem (1708 bytes)
	I0929 11:37:19.688375   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:37:19.742100   49203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/7691.pem --> /usr/share/ca-certificates/7691.pem (1338 bytes)
	I0929 11:37:19.816242   49203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:37:19.897854   49203 ssh_runner.go:195] Run: openssl version
	I0929 11:37:19.908934   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7691.pem && ln -fs /usr/share/ca-certificates/7691.pem /etc/ssl/certs/7691.pem"
	I0929 11:37:19.951175   49203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7691.pem
	I0929 11:37:19.967083   49203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 10:34 /usr/share/ca-certificates/7691.pem
	I0929 11:37:19.967200   49203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7691.pem
	I0929 11:37:19.987017   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7691.pem /etc/ssl/certs/51391683.0"
	I0929 11:37:20.020124   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76912.pem && ln -fs /usr/share/ca-certificates/76912.pem /etc/ssl/certs/76912.pem"
	I0929 11:37:20.053605   49203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76912.pem
	I0929 11:37:20.072192   49203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 10:34 /usr/share/ca-certificates/76912.pem
	I0929 11:37:20.072253   49203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76912.pem
	I0929 11:37:20.096032   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76912.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 11:37:20.139188   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:37:20.184504   49203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:37:20.205931   49203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:37:20.206012   49203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:37:20.222936   49203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 11:37:20.246624   49203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:37:20.259136   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 11:37:20.278296   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 11:37:20.295277   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 11:37:20.310887   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 11:37:20.332222   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 11:37:20.354254   49203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 11:37:20.364975   49203 kubeadm.go:392] StartCluster: {Name:pause-869600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-869600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:37:20.365122   49203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:37:20.365213   49203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:37:20.482022   49203 cri.go:89] found id: "dfd71b5df5fc63eed6ab9ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6"
	I0929 11:37:20.482051   49203 cri.go:89] found id: "37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210"
	I0929 11:37:20.482056   49203 cri.go:89] found id: "d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db"
	I0929 11:37:20.482061   49203 cri.go:89] found id: "97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d"
	I0929 11:37:20.482065   49203 cri.go:89] found id: "631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c"
	I0929 11:37:20.482070   49203 cri.go:89] found id: "18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a"
	I0929 11:37:20.482074   49203 cri.go:89] found id: "8b2c6c969a9b83a4ed26224ca61fc2e81d97539c25dc6de94f2d71769643c9c1"
	I0929 11:37:20.482078   49203 cri.go:89] found id: "fb7db6d5e27e0bf2b3ec273e79652733a9ed4dc3e6d2997328090f52f6965deb"
	I0929 11:37:20.482082   49203 cri.go:89] found id: "b3c53ad62c5484e0a0b54ba98da2cf06a7698632f2b4ee0ce2e73ba1dd58f0af"
	I0929 11:37:20.482090   49203 cri.go:89] found id: "3f7d29856a2242bed628b05be2b9c95fe2ac7f07f17eea4cd8a42b9e7c574cf1"
	I0929 11:37:20.482095   49203 cri.go:89] found id: "669e709381afa32d8813dfd23a9f3e7907e8a235b511de1d87a980818f6bf0ce"
	I0929 11:37:20.482099   49203 cri.go:89] found id: ""
	I0929 11:37:20.482154   49203 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
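(Editor's note, for context only: the container IDs listed in the stderr block above come from the crictl query the test harness runs over SSH, visible at the `ssh_runner.go:195` line. A minimal, hypothetical Go sketch of that same query using only the standard library — not minikube's actual ssh_runner implementation — is shown below.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query as in the log above: list all kube-system containers, IDs only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// Each non-empty line of output is a container ID, matching the
		// "found id: ..." entries logged by cri.go:89.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
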
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-869600 -n pause-869600
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-869600 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-869600 logs -n 25: (1.769974521s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --cancel-scheduled                                                                                                                        │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │ 29 Sep 25 11:33 UTC │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ delete  │ -p scheduled-stop-095431                                                                                                                                           │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ start   │ -p force-systemd-env-887444 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-887444  │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:35 UTC │
	│ start   │ -p offline-crio-857340 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-857340       │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p pause-869600 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-869600              │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p stopped-upgrade-880748 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-880748    │ jenkins │ v1.32.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:36 UTC │
	│ delete  │ -p force-systemd-env-887444                                                                                                                                        │ force-systemd-env-887444  │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:36 UTC │
	│ stop    │ stopped-upgrade-880748 stop                                                                                                                                        │ stopped-upgrade-880748    │ jenkins │ v1.32.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p stopped-upgrade-880748 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p pause-869600 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-869600              │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ stop    │ -p kubernetes-upgrade-197761                                                                                                                                       │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ delete  │ -p offline-crio-857340                                                                                                                                             │ offline-crio-857340       │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p cert-expiration-415186 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-415186    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-880748 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ delete  │ -p stopped-upgrade-880748                                                                                                                                          │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p force-systemd-flag-435555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-435555 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:37:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:37:41.144335   50225 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:37:41.144617   50225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:41.144627   50225 out.go:374] Setting ErrFile to fd 2...
	I0929 11:37:41.144633   50225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:41.144856   50225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:37:41.145308   50225 out.go:368] Setting JSON to false
	I0929 11:37:41.146308   50225 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4806,"bootTime":1759141055,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:37:41.146421   50225 start.go:140] virtualization: kvm guest
	I0929 11:37:41.148691   50225 out.go:179] * [kubernetes-upgrade-197761] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:37:41.149979   50225 notify.go:220] Checking for updates...
	I0929 11:37:41.150018   50225 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:37:41.151437   50225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:37:41.152856   50225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:37:41.154282   50225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:41.155326   50225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:37:41.156624   50225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:37:41.158155   50225 config.go:182] Loaded profile config "kubernetes-upgrade-197761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:37:41.158582   50225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:41.158629   50225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:41.173303   50225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0929 11:37:41.173915   50225 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:41.174502   50225 main.go:141] libmachine: Using API Version  1
	I0929 11:37:41.174541   50225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:41.174875   50225 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:41.175216   50225 main.go:141] libmachine: (kubernetes-upgrade-197761) Calling .DriverName
	I0929 11:37:41.175544   50225 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:37:41.176028   50225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:41.176115   50225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:41.189312   50225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0929 11:37:41.189746   50225 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:41.190136   50225 main.go:141] libmachine: Using API Version  1
	I0929 11:37:41.190159   50225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:41.190547   50225 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:41.190704   50225 main.go:141] libmachine: (kubernetes-upgrade-197761) Calling .DriverName
	I0929 11:37:41.224547   50225 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:37:41.227527   50225 start.go:304] selected driver: kvm2
	I0929 11:37:41.227546   50225 start.go:924] validating driver "kvm2" against &{Name:kubernetes-upgrade-197761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.0 ClusterName:kubernetes-upgrade-197761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.6 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:37:41.227685   50225 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:37:41.228628   50225 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:37:41.228713   50225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:37:41.242737   50225 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:37:41.242767   50225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:37:41.257854   50225 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:37:41.258263   50225 cni.go:84] Creating CNI manager for ""
	I0929 11:37:41.258324   50225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:37:41.258399   50225 start.go:348] cluster config:
	{Name:kubernetes-upgrade-197761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-197761 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.6 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:37:41.258507   50225 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:37:41.260300   50225 out.go:179] * Starting "kubernetes-upgrade-197761" primary control-plane node in "kubernetes-upgrade-197761" cluster
	I0929 11:37:39.503279   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:39.504032   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | no network interface addresses found for domain cert-expiration-415186 (source=lease)
	I0929 11:37:39.504048   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | trying to list again with source=arp
	I0929 11:37:39.504403   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | unable to find current IP address of domain cert-expiration-415186 in network mk-cert-expiration-415186 (interfaces detected: [])
	I0929 11:37:39.504423   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | I0929 11:37:39.504323   49965 retry.go:31] will retry after 4.388688088s: waiting for domain to come up
	W0929 11:37:40.112068   49203 pod_ready.go:104] pod "coredns-66bc5c9577-4jdvs" is not "Ready", error: <nil>
	I0929 11:37:42.108249   49203 pod_ready.go:94] pod "coredns-66bc5c9577-4jdvs" is "Ready"
	I0929 11:37:42.108274   49203 pod_ready.go:86] duration metric: took 6.505505339s for pod "coredns-66bc5c9577-4jdvs" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:42.111858   49203 pod_ready.go:83] waiting for pod "etcd-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:37:44.120335   49203 pod_ready.go:104] pod "etcd-pause-869600" is not "Ready", error: <nil>
	I0929 11:37:45.531618   49913 start.go:364] duration metric: took 28.11622315s to acquireMachinesLock for "force-systemd-flag-435555"
	I0929 11:37:45.531689   49913 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-435555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.34.0 ClusterName:force-systemd-flag-435555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:37:45.531804   49913 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:37:41.261523   50225 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:37:41.261560   50225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:37:41.261580   50225 cache.go:58] Caching tarball of preloaded images
	I0929 11:37:41.261648   50225 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:37:41.261663   50225 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:37:41.261759   50225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/config.json ...
	I0929 11:37:41.261969   50225 start.go:360] acquireMachinesLock for kubernetes-upgrade-197761: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:37:43.894723   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:43.895468   49611 main.go:141] libmachine: (cert-expiration-415186) found domain IP: 192.168.39.205
	I0929 11:37:43.895494   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has current primary IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:43.895502   49611 main.go:141] libmachine: (cert-expiration-415186) reserving static IP address...
	I0929 11:37:43.895933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | unable to find host DHCP lease matching {name: "cert-expiration-415186", mac: "52:54:00:0d:1e:1e", ip: "192.168.39.205"} in network mk-cert-expiration-415186
	I0929 11:37:44.149085   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Getting to WaitForSSH function...
	I0929 11:37:44.149108   49611 main.go:141] libmachine: (cert-expiration-415186) reserved static IP address 192.168.39.205 for domain cert-expiration-415186
	I0929 11:37:44.149123   49611 main.go:141] libmachine: (cert-expiration-415186) waiting for SSH...
	I0929 11:37:44.152648   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.153224   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.153251   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.153497   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Using SSH client type: external
	I0929 11:37:44.153512   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa (-rw-------)
	I0929 11:37:44.153544   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:37:44.153559   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | About to run SSH command:
	I0929 11:37:44.153569   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | exit 0
	I0929 11:37:44.289213   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | SSH cmd err, output: <nil>: 
	I0929 11:37:44.289533   49611 main.go:141] libmachine: (cert-expiration-415186) domain creation complete
	I0929 11:37:44.289942   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetConfigRaw
	I0929 11:37:44.290689   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:44.290925   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:44.291123   49611 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:37:44.291132   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetState
	I0929 11:37:44.292836   49611 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:37:44.292846   49611 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:37:44.292852   49611 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:37:44.292858   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.296539   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.296996   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.297028   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.297272   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.297509   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.297676   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.297830   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.298007   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.298224   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.298229   49611 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:37:44.398470   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:37:44.398484   49611 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:37:44.398492   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.401906   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.402285   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.402303   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.402535   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.402756   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.402902   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.403020   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.403151   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.403387   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.403392   49611 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:37:44.507774   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:37:44.507830   49611 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:37:44.507835   49611 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:37:44.507842   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.508111   49611 buildroot.go:166] provisioning hostname "cert-expiration-415186"
	I0929 11:37:44.508134   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.508340   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.512340   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.512887   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.512933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.513085   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.513265   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.513434   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.513583   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.513745   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.514036   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.514046   49611 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-415186 && echo "cert-expiration-415186" | sudo tee /etc/hostname
	I0929 11:37:44.638787   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-415186
	
	I0929 11:37:44.638808   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.642513   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.643028   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.643058   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.643326   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.643535   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.643719   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.643881   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.644032   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.644219   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.644231   49611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-415186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-415186/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-415186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:37:44.762691   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:37:44.762710   49611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 11:37:44.762746   49611 buildroot.go:174] setting up certificates
	I0929 11:37:44.762758   49611 provision.go:84] configureAuth start
	I0929 11:37:44.762768   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.763121   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:44.765895   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.766218   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.766253   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.766498   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.769682   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.770072   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.770088   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.770283   49611 provision.go:143] copyHostCerts
	I0929 11:37:44.770368   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem, removing ...
	I0929 11:37:44.770381   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem
	I0929 11:37:44.770460   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 11:37:44.770563   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem, removing ...
	I0929 11:37:44.770568   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem
	I0929 11:37:44.770597   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 11:37:44.770645   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem, removing ...
	I0929 11:37:44.770648   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem
	I0929 11:37:44.770670   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 11:37:44.770711   49611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-415186 san=[127.0.0.1 192.168.39.205 cert-expiration-415186 localhost minikube]
	I0929 11:37:44.827038   49611 provision.go:177] copyRemoteCerts
	I0929 11:37:44.827093   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:37:44.827113   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.830433   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.830723   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.830746   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.830950   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.831167   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.831305   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.831459   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:44.914884   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:37:44.947698   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0929 11:37:44.979144   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:37:45.011907   49611 provision.go:87] duration metric: took 249.136952ms to configureAuth
	I0929 11:37:45.011929   49611 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:37:45.012122   49611 config.go:182] Loaded profile config "cert-expiration-415186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:37:45.012202   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.015495   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.015933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.015962   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.016166   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.016367   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.016523   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.016679   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.016797   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:45.017002   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:45.017016   49611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:37:45.267522   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:37:45.267540   49611 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:37:45.267549   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetURL
	I0929 11:37:45.269034   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | using libvirt version 8000000
	I0929 11:37:45.272230   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.272641   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.272664   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.272868   49611 main.go:141] libmachine: Docker is up and running!
	I0929 11:37:45.272875   49611 main.go:141] libmachine: Reticulating splines...
	I0929 11:37:45.272881   49611 client.go:171] duration metric: took 22.489300603s to LocalClient.Create
	I0929 11:37:45.272905   49611 start.go:167] duration metric: took 22.489364851s to libmachine.API.Create "cert-expiration-415186"
	I0929 11:37:45.272924   49611 start.go:293] postStartSetup for "cert-expiration-415186" (driver="kvm2")
	I0929 11:37:45.272945   49611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:37:45.272960   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.273202   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:37:45.273218   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.275954   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.276303   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.276321   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.276641   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.276827   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.277005   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.277145   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:45.361606   49611 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:37:45.367115   49611 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:37:45.367130   49611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 11:37:45.367195   49611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 11:37:45.367287   49611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem -> 76912.pem in /etc/ssl/certs
	I0929 11:37:45.367403   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:37:45.380447   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:37:45.414584   49611 start.go:296] duration metric: took 141.64863ms for postStartSetup
	I0929 11:37:45.414618   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetConfigRaw
	I0929 11:37:45.415346   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:45.418822   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.419253   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.419273   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.419590   49611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/config.json ...
	I0929 11:37:45.419797   49611 start.go:128] duration metric: took 22.658586465s to createHost
	I0929 11:37:45.419813   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.422236   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.422768   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.422789   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.423016   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.423190   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.423332   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.423491   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.423643   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:45.423844   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:45.423848   49611 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:37:45.531487   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145865.496425117
	
	I0929 11:37:45.531498   49611 fix.go:216] guest clock: 1759145865.496425117
	I0929 11:37:45.531514   49611 fix.go:229] Guest: 2025-09-29 11:37:45.496425117 +0000 UTC Remote: 2025-09-29 11:37:45.41980278 +0000 UTC m=+43.576999587 (delta=76.622337ms)
	I0929 11:37:45.531531   49611 fix.go:200] guest clock delta is within tolerance: 76.622337ms
	I0929 11:37:45.531534   49611 start.go:83] releasing machines lock for "cert-expiration-415186", held for 22.770583697s
	I0929 11:37:45.531558   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.531849   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:45.535441   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.535964   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.535989   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.536240   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.536783   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.537002   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.537107   49611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:37:45.537145   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.537285   49611 ssh_runner.go:195] Run: cat /version.json
	I0929 11:37:45.537301   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.540833   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.540991   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541297   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.541316   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541341   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.541377   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541524   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.541691   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.541781   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.541880   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.541940   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.542014   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:45.542094   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.542210   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
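The two sshutil lines above just open SSH clients into the new VM so the commands that follow can run inside it. A minimal sketch of such a connection using golang.org/x/crypto/ssh — the host, key path, user and probe command are taken from the log, error handling is simplified, and this is not minikube's own ssh_runner:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address are the ones shown in the sshutil log line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.205:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /version.json") // same probe the log runs
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```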
	I0929 11:37:45.655252   49611 ssh_runner.go:195] Run: systemctl --version
	I0929 11:37:45.662656   49611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:37:45.833726   49611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:37:45.842977   49611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:37:45.843031   49611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:37:45.870533   49611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:37:45.870548   49611 start.go:495] detecting cgroup driver to use...
	I0929 11:37:45.870651   49611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:37:45.893748   49611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:37:45.912086   49611 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:37:45.912143   49611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:37:45.930123   49611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:37:45.949543   49611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:37:46.104338   49611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:37:46.323335   49611 docker.go:234] disabling docker service ...
	I0929 11:37:46.323411   49611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:37:46.341559   49611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:37:46.358427   49611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:37:46.525285   49611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:37:46.690963   49611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
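The systemctl calls above stop, disable and mask the Docker-based runtimes so CRI-O is the only CRI left on the node. A rough Go sketch of that sequence — the unit names come from the log, but the exact command order here is an assumption, not minikube's code:

```go
package main

import (
	"log"
	"os/exec"
)

// disableService stops, disables and masks one systemd unit, mirroring the
// treatment the log applies to cri-docker and docker before switching to CRI-O.
func disableService(name string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", name},
		{"systemctl", "disable", name},
		{"systemctl", "mask", name},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			// Missing units are expected on some images, so just log and continue.
			log.Printf("%v: %v (%s)", args, err, out)
		}
	}
}

func main() {
	for _, svc := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		disableService(svc)
	}
}
```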
	I0929 11:37:46.708553   49611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:37:46.740011   49611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:37:46.740059   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.758488   49611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:37:46.758535   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.777448   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.793104   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.808537   49611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:37:46.824680   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.837871   49611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.859136   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
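The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image and the cgroupfs cgroup manager. A minimal Go sketch of the same kind of whole-line rewrite — the patterns and values are copied from the log, but the helper itself is illustrative, not minikube's implementation:

```go
package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl, mirroring the
// `sed -i 's|^.*key = .*$|key = "value"|'` invocations in the log above.
func rewriteLine(data []byte, pattern, repl string) []byte {
	re := regexp.MustCompile("(?m)" + pattern)
	return re.ReplaceAll(data, []byte(repl))
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; requires root
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	data = rewriteLine(data, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	data = rewriteLine(data, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```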
	I0929 11:37:46.873719   49611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:37:45.534409   49913 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0929 11:37:45.534606   49913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:45.534652   49913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:45.549808   49913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0929 11:37:45.550259   49913 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:45.550872   49913 main.go:141] libmachine: Using API Version  1
	I0929 11:37:45.550892   49913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:45.551374   49913 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:45.551643   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .GetMachineName
	I0929 11:37:45.551822   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .DriverName
	I0929 11:37:45.552023   49913 start.go:159] libmachine.API.Create for "force-systemd-flag-435555" (driver="kvm2")
	I0929 11:37:45.552065   49913 client.go:168] LocalClient.Create starting
	I0929 11:37:45.552104   49913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem
	I0929 11:37:45.552154   49913 main.go:141] libmachine: Decoding PEM data...
	I0929 11:37:45.552188   49913 main.go:141] libmachine: Parsing certificate...
	I0929 11:37:45.552272   49913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem
	I0929 11:37:45.552316   49913 main.go:141] libmachine: Decoding PEM data...
	I0929 11:37:45.552332   49913 main.go:141] libmachine: Parsing certificate...
	I0929 11:37:45.552377   49913 main.go:141] libmachine: Running pre-create checks...
	I0929 11:37:45.552390   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .PreCreateCheck
	I0929 11:37:45.552773   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .GetConfigRaw
	I0929 11:37:45.553401   49913 main.go:141] libmachine: Creating machine...
	I0929 11:37:45.553418   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .Create
	I0929 11:37:45.553572   49913 main.go:141] libmachine: (force-systemd-flag-435555) creating domain...
	I0929 11:37:45.553592   49913 main.go:141] libmachine: (force-systemd-flag-435555) creating network...
	I0929 11:37:45.555099   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | found existing default network
	I0929 11:37:45.555244   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network connections='3'>
	I0929 11:37:45.555277   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>default</name>
	I0929 11:37:45.555296   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:37:45.555312   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <forward mode='nat'>
	I0929 11:37:45.555321   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <nat>
	I0929 11:37:45.555329   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <port start='1024' end='65535'/>
	I0929 11:37:45.555336   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </nat>
	I0929 11:37:45.555342   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </forward>
	I0929 11:37:45.555366   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:37:45.555388   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:37:45.555401   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:37:45.555408   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.555422   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:37:45.555429   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.555440   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.555464   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.555474   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.556537   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.556326   50310 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:90:ea} reservation:<nil>}
	I0929 11:37:45.557094   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.557007   50310 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c2:17:dc} reservation:<nil>}
	I0929 11:37:45.557884   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.557795   50310 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025ab60}
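The network.go lines above check which private subnets are already bound to host interfaces, skip those, and settle on 192.168.61.0/24. A simplified Go sketch of that selection — the candidate list and the helper name are assumptions made for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that no host interface
// address falls into, echoing the "skipping subnet ... that is taken" /
// "using free private subnet" lines in the log.
func firstFreeSubnet(candidates []string) (*net.IPNet, error) {
	ifaceAddrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		taken := false
		for _, a := range ifaceAddrs {
			if ipNet, ok := a.(*net.IPNet); ok && subnet.Contains(ipNet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("using free private subnet:", subnet)
}
```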
	I0929 11:37:45.557911   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | defining private network:
	I0929 11:37:45.557923   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.557935   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network>
	I0929 11:37:45.557948   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>mk-force-systemd-flag-435555</name>
	I0929 11:37:45.557957   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <dns enable='no'/>
	I0929 11:37:45.557967   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:37:45.557977   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.557987   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:37:45.558002   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.558036   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.558060   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.558127   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.563958   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | creating private network mk-force-systemd-flag-435555 192.168.61.0/24...
	I0929 11:37:45.648794   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | private network mk-force-systemd-flag-435555 192.168.61.0/24 created
	I0929 11:37:45.649102   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network>
	I0929 11:37:45.649119   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>mk-force-systemd-flag-435555</name>
	I0929 11:37:45.649128   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting up store path in /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 ...
	I0929 11:37:45.649142   49913 main.go:141] libmachine: (force-systemd-flag-435555) building disk image from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:37:45.649149   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>7524cf0e-8669-4a69-bfd1-5fc5c400d096</uuid>
	I0929 11:37:45.649157   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0929 11:37:45.649170   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <mac address='52:54:00:af:c0:73'/>
	I0929 11:37:45.649181   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <dns enable='no'/>
	I0929 11:37:45.649197   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:37:45.649220   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.649231   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:37:45.649236   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.649243   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.649252   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.649264   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
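The private network XML shown above is generated from the chosen subnet before being handed to libvirt. A small Go text/template sketch that produces XML of the same shape — field names and the template itself are assumptions, not minikube's actual template:

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// networkTmpl mirrors the shape of the private-network XML in the log above.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

type networkParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

func main() {
	tmpl := template.Must(template.New("network").Parse(networkTmpl))
	p := networkParams{
		Name:      "mk-force-systemd-flag-435555",
		Gateway:   "192.168.61.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.61.2",
		DHCPEnd:   "192.168.61.253",
	}
	// The generated XML would normally be passed to libvirt (e.g. via virsh net-define).
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
```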
	I0929 11:37:45.649282   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.649137   50310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:45.649383   49913 main.go:141] libmachine: (force-systemd-flag-435555) Downloading /home/jenkins/minikube-integration/21657-3816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:37:45.871901   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.871728   50310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/id_rsa...
	I0929 11:37:45.999118   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.998966   50310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk...
	I0929 11:37:45.999154   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | Writing magic tar header
	I0929 11:37:45.999173   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | Writing SSH key tar header
	I0929 11:37:45.999185   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.999138   50310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 ...
	I0929 11:37:45.999334   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555
	I0929 11:37:45.999382   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 (perms=drwx------)
	I0929 11:37:45.999400   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines
	I0929 11:37:45.999421   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:45.999437   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816
	I0929 11:37:45.999476   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:37:45.999498   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:37:45.999510   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins
	I0929 11:37:45.999529   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home
	I0929 11:37:45.999541   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | skipping /home - not owner
	I0929 11:37:45.999556   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube (perms=drwxr-xr-x)
	I0929 11:37:45.999567   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816 (perms=drwxrwxr-x)
	I0929 11:37:45.999598   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:37:45.999628   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
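The permission fixing above walks from the machine directory up towards /home, adding the executable bit only on directories the test user owns and skipping the rest ("skipping /home - not owner"). A rough, Linux-only Go sketch of that walk — the helper name and parameters are assumptions:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// ensureExecutableUp walks from dir up to stopAt, adding the owner-executable
// bit on directories owned by the current user and skipping the others.
func ensureExecutableUp(dir, stopAt string) error {
	uid := uint32(os.Getuid())
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok && st.Uid == uid {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				return err
			}
		}
		if dir == stopAt || dir == string(filepath.Separator) {
			return nil
		}
		dir = filepath.Dir(dir)
	}
}

func main() {
	if err := ensureExecutableUp(os.TempDir(), "/"); err != nil {
		log.Fatal(err)
	}
}
```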
	I0929 11:37:45.999640   49913 main.go:141] libmachine: (force-systemd-flag-435555) defining domain...
	I0929 11:37:46.000820   49913 main.go:141] libmachine: (force-systemd-flag-435555) defining domain using XML: 
	I0929 11:37:46.000842   49913 main.go:141] libmachine: (force-systemd-flag-435555) <domain type='kvm'>
	I0929 11:37:46.000867   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <name>force-systemd-flag-435555</name>
	I0929 11:37:46.000886   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <memory unit='MiB'>3072</memory>
	I0929 11:37:46.000896   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <vcpu>2</vcpu>
	I0929 11:37:46.000906   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <features>
	I0929 11:37:46.000915   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <acpi/>
	I0929 11:37:46.000925   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <apic/>
	I0929 11:37:46.000943   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <pae/>
	I0929 11:37:46.000953   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </features>
	I0929 11:37:46.000998   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <cpu mode='host-passthrough'>
	I0929 11:37:46.001032   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </cpu>
	I0929 11:37:46.001045   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <os>
	I0929 11:37:46.001056   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <type>hvm</type>
	I0929 11:37:46.001065   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <boot dev='cdrom'/>
	I0929 11:37:46.001076   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <boot dev='hd'/>
	I0929 11:37:46.001085   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <bootmenu enable='no'/>
	I0929 11:37:46.001091   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </os>
	I0929 11:37:46.001100   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <devices>
	I0929 11:37:46.001112   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <disk type='file' device='cdrom'>
	I0929 11:37:46.001129   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/boot2docker.iso'/>
	I0929 11:37:46.001139   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target dev='hdc' bus='scsi'/>
	I0929 11:37:46.001147   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <readonly/>
	I0929 11:37:46.001155   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </disk>
	I0929 11:37:46.001164   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <disk type='file' device='disk'>
	I0929 11:37:46.001177   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:37:46.001195   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk'/>
	I0929 11:37:46.001206   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target dev='hda' bus='virtio'/>
	I0929 11:37:46.001214   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </disk>
	I0929 11:37:46.001225   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <interface type='network'>
	I0929 11:37:46.001235   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source network='mk-force-systemd-flag-435555'/>
	I0929 11:37:46.001245   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <model type='virtio'/>
	I0929 11:37:46.001254   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </interface>
	I0929 11:37:46.001265   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <interface type='network'>
	I0929 11:37:46.001273   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source network='default'/>
	I0929 11:37:46.001285   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <model type='virtio'/>
	I0929 11:37:46.001292   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </interface>
	I0929 11:37:46.001304   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <serial type='pty'>
	I0929 11:37:46.001314   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target port='0'/>
	I0929 11:37:46.001330   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </serial>
	I0929 11:37:46.001344   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <console type='pty'>
	I0929 11:37:46.001373   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target type='serial' port='0'/>
	I0929 11:37:46.001392   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </console>
	I0929 11:37:46.001401   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <rng model='virtio'>
	I0929 11:37:46.001413   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <backend model='random'>/dev/random</backend>
	I0929 11:37:46.001421   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </rng>
	I0929 11:37:46.001431   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </devices>
	I0929 11:37:46.001453   49913 main.go:141] libmachine: (force-systemd-flag-435555) </domain>
	I0929 11:37:46.001468   49913 main.go:141] libmachine: (force-systemd-flag-435555) 
	I0929 11:37:46.006699   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:cc:bc:06 in network default
	I0929 11:37:46.007512   49913 main.go:141] libmachine: (force-systemd-flag-435555) starting domain...
	I0929 11:37:46.007552   49913 main.go:141] libmachine: (force-systemd-flag-435555) ensuring networks are active...
	I0929 11:37:46.007566   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:46.008497   49913 main.go:141] libmachine: (force-systemd-flag-435555) Ensuring network default is active
	I0929 11:37:46.008880   49913 main.go:141] libmachine: (force-systemd-flag-435555) Ensuring network mk-force-systemd-flag-435555 is active
	I0929 11:37:46.009882   49913 main.go:141] libmachine: (force-systemd-flag-435555) getting domain XML...
	I0929 11:37:46.011308   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | starting domain XML:
	I0929 11:37:46.011329   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <domain type='kvm'>
	I0929 11:37:46.011340   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>force-systemd-flag-435555</name>
	I0929 11:37:46.011368   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>69ceb9a2-2011-45f3-a825-e0cef8c12c06</uuid>
	I0929 11:37:46.011384   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 11:37:46.011393   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 11:37:46.011430   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:37:46.011487   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <os>
	I0929 11:37:46.011505   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:37:46.011516   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <boot dev='cdrom'/>
	I0929 11:37:46.011525   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <boot dev='hd'/>
	I0929 11:37:46.011536   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <bootmenu enable='no'/>
	I0929 11:37:46.011562   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </os>
	I0929 11:37:46.011587   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <features>
	I0929 11:37:46.011596   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <acpi/>
	I0929 11:37:46.011610   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <apic/>
	I0929 11:37:46.011644   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <pae/>
	I0929 11:37:46.011664   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </features>
	I0929 11:37:46.011675   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:37:46.011697   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <clock offset='utc'/>
	I0929 11:37:46.011707   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:37:46.011718   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:37:46.011776   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_crash>destroy</on_crash>
	I0929 11:37:46.011792   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <devices>
	I0929 11:37:46.011804   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:37:46.011812   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <disk type='file' device='cdrom'>
	I0929 11:37:46.011823   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:37:46.011837   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/boot2docker.iso'/>
	I0929 11:37:46.011848   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:37:46.011856   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <readonly/>
	I0929 11:37:46.011867   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:37:46.011874   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </disk>
	I0929 11:37:46.011885   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <disk type='file' device='disk'>
	I0929 11:37:46.011897   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:37:46.011913   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk'/>
	I0929 11:37:46.011925   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:37:46.011937   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:37:46.011948   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </disk>
	I0929 11:37:46.011960   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:37:46.011986   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:37:46.012000   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </controller>
	I0929 11:37:46.012013   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:37:46.012027   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:37:46.012044   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:37:46.012057   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </controller>
	I0929 11:37:46.012069   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <interface type='network'>
	I0929 11:37:46.012082   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <mac address='52:54:00:ac:a5:58'/>
	I0929 11:37:46.012095   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source network='mk-force-systemd-flag-435555'/>
	I0929 11:37:46.012107   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <model type='virtio'/>
	I0929 11:37:46.012124   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:37:46.012150   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </interface>
	I0929 11:37:46.012162   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <interface type='network'>
	I0929 11:37:46.012176   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <mac address='52:54:00:cc:bc:06'/>
	I0929 11:37:46.012186   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source network='default'/>
	I0929 11:37:46.012199   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <model type='virtio'/>
	I0929 11:37:46.012215   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:37:46.012228   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </interface>
	I0929 11:37:46.012240   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <serial type='pty'>
	I0929 11:37:46.012254   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target type='isa-serial' port='0'>
	I0929 11:37:46.012265   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |         <model name='isa-serial'/>
	I0929 11:37:46.012277   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       </target>
	I0929 11:37:46.012293   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </serial>
	I0929 11:37:46.012306   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <console type='pty'>
	I0929 11:37:46.012322   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target type='serial' port='0'/>
	I0929 11:37:46.012336   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </console>
	I0929 11:37:46.012347   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:37:46.012396   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:37:46.012433   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <audio id='1' type='none'/>
	I0929 11:37:46.012455   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <memballoon model='virtio'>
	I0929 11:37:46.012472   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:37:46.012485   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </memballoon>
	I0929 11:37:46.012495   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <rng model='virtio'>
	I0929 11:37:46.012507   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:37:46.012517   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:37:46.012528   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </rng>
	I0929 11:37:46.012535   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </devices>
	I0929 11:37:46.012553   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </domain>
	I0929 11:37:46.012567   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:46.886241   49611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:37:46.886297   49611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:37:46.909620   49611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
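The fallback above is: if the net.bridge.bridge-nf-call-iptables sysctl cannot be read, load the br_netfilter module, then make sure IPv4 forwarding is enabled. A minimal Go sketch of that sequence (requires root; this is an illustration, not minikube's code):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: probe the sysctl,
// load br_netfilter if the probe fails, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// sysctl failed (e.g. /proc/sys/net/bridge/... missing), so load the module.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}
```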
	I0929 11:37:46.922962   49611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:37:47.095931   49611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:37:47.231158   49611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:37:47.231226   49611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:37:47.237341   49611 start.go:563] Will wait 60s for crictl version
	I0929 11:37:47.237406   49611 ssh_runner.go:195] Run: which crictl
	I0929 11:37:47.242176   49611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:37:47.299122   49611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:37:47.299174   49611 ssh_runner.go:195] Run: crio --version
	I0929 11:37:47.339309   49611 ssh_runner.go:195] Run: crio --version
	I0929 11:37:47.377490   49611 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
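Runtime detection above simply waits for the CRI-O socket and then shells out to crictl and crio for their versions. An illustrative Go equivalent of the version probe — the crictl path is taken from the log, the rest is a sketch:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same probe the log runs: sudo /usr/bin/crictl version
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```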
	W0929 11:37:46.618701   49203 pod_ready.go:104] pod "etcd-pause-869600" is not "Ready", error: <nil>
	I0929 11:37:47.620897   49203 pod_ready.go:94] pod "etcd-pause-869600" is "Ready"
	I0929 11:37:47.620930   49203 pod_ready.go:86] duration metric: took 5.509048337s for pod "etcd-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:47.624091   49203 pod_ready.go:83] waiting for pod "kube-apiserver-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.131965   49203 pod_ready.go:94] pod "kube-apiserver-pause-869600" is "Ready"
	I0929 11:37:49.131998   49203 pod_ready.go:86] duration metric: took 1.507874128s for pod "kube-apiserver-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.135317   49203 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.141190   49203 pod_ready.go:94] pod "kube-controller-manager-pause-869600" is "Ready"
	I0929 11:37:49.141223   49203 pod_ready.go:86] duration metric: took 5.880041ms for pod "kube-controller-manager-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.143906   49203 pod_ready.go:83] waiting for pod "kube-proxy-7t7c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.150489   49203 pod_ready.go:94] pod "kube-proxy-7t7c5" is "Ready"
	I0929 11:37:49.150522   49203 pod_ready.go:86] duration metric: took 6.575659ms for pod "kube-proxy-7t7c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.216962   49203 pod_ready.go:83] waiting for pod "kube-scheduler-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.616515   49203 pod_ready.go:94] pod "kube-scheduler-pause-869600" is "Ready"
	I0929 11:37:49.616551   49203 pod_ready.go:86] duration metric: took 399.55305ms for pod "kube-scheduler-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.616569   49203 pod_ready.go:40] duration metric: took 14.018200469s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:37:49.663339   49203 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:37:49.667963   49203 out.go:179] * Done! kubectl is now configured to use "pause-869600" cluster and "default" namespace by default
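The pod_ready loop above polls each control-plane pod until its Ready condition is True or the overall wait times out. A minimal client-go sketch of that kind of readiness poll — this is not minikube's pod_ready implementation, and the pod name, interval and timeout are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the pod until its Ready condition is True or ctx expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", namespace, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "etcd-pause-869600"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}
```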
	
	
	==> CRI-O <==
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.529288824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df9df7b0-19da-4df0-861b-629f1dda9c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.529399549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df9df7b0-19da-4df0-861b-629f1dda9c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.529823127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df9df7b0-19da-4df0-861b-629f1dda9c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.556839601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c086314a-4e1e-4575-9498-add3a43c3fe0 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.556981373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c086314a-4e1e-4575-9498-add3a43c3fe0 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.559295550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b6de141-5f40-4e73-86a6-1a8f29f89b2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.559849290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145870559825920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b6de141-5f40-4e73-86a6-1a8f29f89b2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.561239637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae4e88d4-cc77-4406-ab0f-d81df3a2eb13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.561336986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae4e88d4-cc77-4406-ab0f-d81df3a2eb13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.561585777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae4e88d4-cc77-4406-ab0f-d81df3a2eb13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.612546483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26fe0bf9-e9ea-4bcc-af6b-d9b36a4ea98c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.612687877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26fe0bf9-e9ea-4bcc-af6b-d9b36a4ea98c name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.615804105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=870c56cb-5c15-4c95-9619-ca18ad5ee2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.616343707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145870616319748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=870c56cb-5c15-4c95-9619-ca18ad5ee2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.617200899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8a01979-9f81-43f6-b1e5-db8cbeaadec3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.617324750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8a01979-9f81-43f6-b1e5-db8cbeaadec3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.618027442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8a01979-9f81-43f6-b1e5-db8cbeaadec3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.672368641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff80da61-e3ae-4502-9511-a2db6e3dddcb name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.672496399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff80da61-e3ae-4502-9511-a2db6e3dddcb name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.674466341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa0e5699-1456-475c-873c-ebe34bab0b1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.675234943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145870675192373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa0e5699-1456-475c-873c-ebe34bab0b1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.676086933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9819bbe4-3dcb-4370-8fe5-9a9b56a25fa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.676161396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9819bbe4-3dcb-4370-8fe5-9a9b56a25fa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.676548616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9819bbe4-3dcb-4370-8fe5-9a9b56a25fa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:50 pause-869600 crio[3357]: time="2025-09-29 11:37:50.681730823Z" level=debug msg="received signal" file="crio/main.go:57" signal="broken pipe"
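
(Editorial note, not part of the captured journal: the block above is CRI-O's debug-level logging of CRI gRPC traffic, Version, ImageFsInfo, and ListContainers requests, emitted by its otel-collector interceptor; the near-identical ListContainers responses are simply successive polls of the same runtime state. For orientation only, the Go sketch below issues the same ListContainers call against a CRI-O socket. The module imports, socket path, and output format are assumptions made for illustration and are not taken from this report.)

// Hedged sketch: list containers over the CRI gRPC API, roughly what the
// Request/Response pairs in the journal above correspond to.
// Assumes k8s.io/cri-api and google.golang.org/grpc are available and that
// CRI-O listens on its default unix socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied,
	// returning full container list" lines in the journal above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
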
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78a357841af5c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   2                   8951e553ffe6d       coredns-66bc5c9577-4jdvs
	72273cf18a35d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   17 seconds ago      Running             kube-proxy                3                   36ef9575d84a1       kube-proxy-7t7c5
	423b47a312721       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago      Running             etcd                      3                   4280293148dfa       etcd-pause-869600
	e0d38dfdf3c2a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   21 seconds ago      Running             kube-controller-manager   3                   d9bda1da2de7d       kube-controller-manager-pause-869600
	e91e28a3e9274       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   21 seconds ago      Running             kube-apiserver            3                   0dd61cbeaea56       kube-apiserver-pause-869600
	7a673a11069a0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   21 seconds ago      Running             kube-scheduler            3                   90332d6e995f1       kube-scheduler-pause-869600
	dfd71b5df5fc6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   31 seconds ago      Exited              kube-controller-manager   2                   d9bda1da2de7d       kube-controller-manager-pause-869600
	37c3be3aca1bf       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   31 seconds ago      Exited              kube-scheduler            2                   90332d6e995f1       kube-scheduler-pause-869600
	d57516244a568       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   31 seconds ago      Exited              kube-proxy                2                   36ef9575d84a1       kube-proxy-7t7c5
	97dd7f4f8e1b5       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   31 seconds ago      Exited              kube-apiserver            2                   0dd61cbeaea56       kube-apiserver-pause-869600
	631a9b239bbb6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   31 seconds ago      Exited              etcd                      2                   4280293148dfa       etcd-pause-869600
	18662ffc7957d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   4eb3b5bad382d       coredns-66bc5c9577-4jdvs
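
(Editorial note, not part of the captured output: the table above is the harness's container-status snapshot. A roughly equivalent listing can be produced on the node with CRI-O's command-line client; the minimal sketch below just shells out to crictl and assumes, purely for illustration, that crictl is installed and configured for the CRI-O socket.)

// Hedged sketch: reproduce a listing similar to the
// "==> container status <==" table by invoking crictl on the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -a includes exited containers, matching the Exited rows above.
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl ps -a failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
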
	
	
	==> coredns [18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57736 - 64303 "HINFO IN 1784267252011926155.4111428683321706473. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063623689s
	
	
	==> coredns [78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40788 - 23464 "HINFO IN 3474408110492409780.3150661101563447014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079109455s
	
	
	==> describe nodes <==
	Name:               pause-869600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-869600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=pause-869600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_35_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:35:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-869600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.21
	  Hostname:    pause-869600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb4e27945a8d4c53957e1a8a4b7047e8
	  System UUID:                eb4e2794-5a8d-4c53-957e-1a8a4b7047e8
	  Boot ID:                    0d164a3b-aa8a-41a7-8f1b-a9b8bdbb05e2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4jdvs                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     113s
	  kube-system                 etcd-pause-869600                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-869600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-869600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-7t7c5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-869600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m5s)  kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m5s)  kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m5s)  kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                118s                 kubelet          Node pause-869600 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                 node-controller  Node pause-869600 event: Registered Node pause-869600 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                  node-controller  Node pause-869600 event: Registered Node pause-869600 in Controller
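
(Editorial note, not part of the captured output: the node description above, conditions, capacity, and events for pause-869600, is the kind of data kubectl's describe-node view reports; the conditions themselves come from the Node object's status. As an illustration only, the client-go sketch below reads those conditions from the API server; the kubeconfig path and node name are assumptions for the example, not values verified by this report.)

// Hedged sketch: print node conditions (Ready, MemoryPressure, ...) for the
// node described above, using client-go against the current kubeconfig.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name assumed from the report's context.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-869600", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
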
	
	
	==> dmesg <==
	[Sep29 11:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003491] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187573] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.094279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110689] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.100828] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.151195] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.121465] kauditd_printk_skb: 18 callbacks suppressed
	[Sep29 11:36] kauditd_printk_skb: 219 callbacks suppressed
	[ +27.532636] kauditd_printk_skb: 38 callbacks suppressed
	[Sep29 11:37] kauditd_printk_skb: 275 callbacks suppressed
	[  +3.963631] kauditd_printk_skb: 245 callbacks suppressed
	[  +0.675512] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.220239] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.050755] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657] <==
	{"level":"warn","ts":"2025-09-29T11:37:31.856075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.868231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.877315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.885746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.893044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.903255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.914102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.919515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.929577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.938388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.947243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.959122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.965502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.973824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.981999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:31.995120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.004303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.013582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.023197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.031089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.041336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.051154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.060657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.068064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.145641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38682","server-name":"","error":"EOF"}
	
	
	==> etcd [631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c] <==
	{"level":"warn","ts":"2025-09-29T11:37:22.029215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.039113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.057136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.074773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.082082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.091404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.136307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:37:24.310830Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:37:24.311055Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-869600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"]}
	{"level":"error","ts":"2025-09-29T11:37:24.311339Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:37:24.313298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313921Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.21:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313936Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.21:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:37:24.313944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.21:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T11:37:24.313507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.313527Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b85f157810fe4ab","current-leader-member-id":"6b85f157810fe4ab"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313749Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:37:24.314075Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:37:24.314085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.314097Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T11:37:24.314191Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:37:24.318141Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"error","ts":"2025-09-29T11:37:24.318199Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.21:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.318219Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2025-09-29T11:37:24.318225Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-869600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"]}
	
	
	==> kernel <==
	 11:37:51 up 2 min,  0 users,  load average: 1.43, 0.66, 0.26
	Linux pause-869600 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d] <==
	I0929 11:37:23.156567       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0929 11:37:23.156585       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0929 11:37:23.156595       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0929 11:37:23.156693       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I0929 11:37:23.157095       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0929 11:37:23.157619       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0929 11:37:23.157828       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0929 11:37:23.159939       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0929 11:37:23.160158       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0929 11:37:23.160201       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I0929 11:37:23.160224       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0929 11:37:23.160284       1 secure_serving.go:259] Stopped listening on [::]:8443
	I0929 11:37:23.160322       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:37:23.160423       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0929 11:37:23.160689       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I0929 11:37:23.160868       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0929 11:37:23.161113       1 controller.go:157] Shutting down quota evaluator
	I0929 11:37:23.161141       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161370       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161614       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161623       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161661       1 controller.go:176] quota evaluator worker shutdown
	W0929 11:37:23.703793       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0929 11:37:23.705412       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	I0929 11:37:24.200678       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	
	
	==> kube-apiserver [e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329] <==
	{"level":"warn","ts":"2025-09-29T11:37:34.168084Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.171950Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.194109Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.200101Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.218644Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.226797Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.242285Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.251246Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.267014Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.274316Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.290754Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.301663Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.314256Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.327511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.341280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.352442Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.368188Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.395683Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	I0929 11:37:34.949530       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:37:35.039127       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:37:35.093132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:37:35.107649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:37:36.579383       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:37:36.628699       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:37:36.678142       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dfd71b5df5fc63eed6ab9ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6] <==
	
	
	==> kube-controller-manager [e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c] <==
	I0929 11:37:36.328941       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:37:36.334535       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:37:36.334600       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:37:36.334615       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:37:36.342140       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:37:36.349771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:37:36.351000       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 11:37:36.352366       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:37:36.356975       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:37:36.361363       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:37:36.364995       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:37:36.367501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:37:36.371319       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:37:36.371383       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:37:36.373162       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:37:36.373190       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:37:36.373370       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:37:36.373203       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:37:36.373504       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-869600"
	I0929 11:37:36.373603       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:37:36.373848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:37:36.375766       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:37:36.376383       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:37:36.376475       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:37:36.378162       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2] <==
	I0929 11:37:33.733487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:37:33.834222       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:37:33.834508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.21"]
	E0929 11:37:33.834672       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:37:33.898407       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:37:33.898518       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:37:33.898554       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:37:33.917027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:37:33.917552       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:37:33.917597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:37:33.925225       1 config.go:200] "Starting service config controller"
	I0929 11:37:33.925949       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:37:33.926017       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:37:33.926054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:37:33.926077       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:37:33.926091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:37:33.926728       1 config.go:309] "Starting node config controller"
	I0929 11:37:33.928203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:37:33.928244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:37:34.026444       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:37:34.026550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:37:34.026567       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db] <==
	I0929 11:37:20.541660       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:37:21.236588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:37:22.837998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:37:22.838053       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.21"]
	E0929 11:37:22.838145       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	
	
	==> kube-scheduler [37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210] <==
	I0929 11:37:21.559384       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:37:22.730437       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:37:22.731960       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:37:22.732083       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:37:22.732107       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:37:22.864044       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:37:22.868053       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0929 11:37:22.868130       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0929 11:37:22.882772       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0929 11:37:22.882941       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I0929 11:37:22.883034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883069       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 11:37:22.883079       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883086       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883104       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:37:22.883131       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:37:22.883207       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:37:22.883241       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:37:22.883246       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:37:22.883274       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03] <==
	I0929 11:37:30.221573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:37:32.797029       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:37:32.797070       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 11:37:32.797079       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:37:32.797085       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:37:32.835197       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:37:32.835265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:37:32.837365       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:32.837428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:32.837658       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:37:32.837947       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:37:32.937604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:37:31 pause-869600 kubelet[4503]: E0929 11:37:31.419109    4503 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-869600\" not found" node="pause-869600"
	Sep 29 11:37:31 pause-869600 kubelet[4503]: E0929 11:37:31.420103    4503 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-869600\" not found" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.751128    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.885066    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-869600\" already exists" pod="kube-system/kube-apiserver-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.885093    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.896592    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-869600\" already exists" pod="kube-system/kube-controller-manager-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.896745    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.907025    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-869600\" already exists" pod="kube-system/kube-scheduler-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.907642    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.919731    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-869600\" already exists" pod="kube-system/etcd-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.956487    4503 kubelet_node_status.go:124] "Node was previously registered" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.957227    4503 kubelet_node_status.go:78] "Successfully registered node" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.957280    4503 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.960540    4503 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.124225    4503 apiserver.go:52] "Watching apiserver"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.151592    4503 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.177441    4503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e54a7fe-cc9b-4796-bd74-320f42680285-xtables-lock\") pod \"kube-proxy-7t7c5\" (UID: \"0e54a7fe-cc9b-4796-bd74-320f42680285\") " pod="kube-system/kube-proxy-7t7c5"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.177497    4503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e54a7fe-cc9b-4796-bd74-320f42680285-lib-modules\") pod \"kube-proxy-7t7c5\" (UID: \"0e54a7fe-cc9b-4796-bd74-320f42680285\") " pod="kube-system/kube-proxy-7t7c5"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.431527    4503 scope.go:117] "RemoveContainer" containerID="18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.433183    4503 scope.go:117] "RemoveContainer" containerID="d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db"
	Sep 29 11:37:38 pause-869600 kubelet[4503]: E0929 11:37:38.295542    4503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145858295148878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:38 pause-869600 kubelet[4503]: E0929 11:37:38.295572    4503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145858295148878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:41 pause-869600 kubelet[4503]: I0929 11:37:41.652343    4503 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 29 11:37:48 pause-869600 kubelet[4503]: E0929 11:37:48.298835    4503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145868298184035  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:48 pause-869600 kubelet[4503]: E0929 11:37:48.298956    4503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145868298184035  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-869600 -n pause-869600
helpers_test.go:269: (dbg) Run:  kubectl --context pause-869600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-869600 -n pause-869600
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-869600 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-869600 logs -n 25: (3.020189354s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --cancel-scheduled                                                                                                                        │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:33 UTC │ 29 Sep 25 11:33 UTC │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │                     │
	│ stop    │ -p scheduled-stop-095431 --schedule 15s                                                                                                                            │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ delete  │ -p scheduled-stop-095431                                                                                                                                           │ scheduled-stop-095431     │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:34 UTC │
	│ start   │ -p force-systemd-env-887444 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-887444  │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:35 UTC │
	│ start   │ -p offline-crio-857340 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-857340       │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p pause-869600 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-869600              │ jenkins │ v1.37.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p stopped-upgrade-880748 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-880748    │ jenkins │ v1.32.0 │ 29 Sep 25 11:34 UTC │ 29 Sep 25 11:36 UTC │
	│ delete  │ -p force-systemd-env-887444                                                                                                                                        │ force-systemd-env-887444  │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:35 UTC │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:35 UTC │ 29 Sep 25 11:36 UTC │
	│ stop    │ stopped-upgrade-880748 stop                                                                                                                                        │ stopped-upgrade-880748    │ jenkins │ v1.32.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p stopped-upgrade-880748 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p pause-869600 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-869600              │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ stop    │ -p kubernetes-upgrade-197761                                                                                                                                       │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:36 UTC │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:36 UTC │ 29 Sep 25 11:37 UTC │
	│ delete  │ -p offline-crio-857340                                                                                                                                             │ offline-crio-857340       │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p cert-expiration-415186 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-415186    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-880748 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ delete  │ -p stopped-upgrade-880748                                                                                                                                          │ stopped-upgrade-880748    │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │ 29 Sep 25 11:37 UTC │
	│ start   │ -p force-systemd-flag-435555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-435555 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	│ start   │ -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-197761 │ jenkins │ v1.37.0 │ 29 Sep 25 11:37 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:37:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:37:41.144335   50225 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:37:41.144617   50225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:41.144627   50225 out.go:374] Setting ErrFile to fd 2...
	I0929 11:37:41.144633   50225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:41.144856   50225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:37:41.145308   50225 out.go:368] Setting JSON to false
	I0929 11:37:41.146308   50225 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4806,"bootTime":1759141055,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:37:41.146421   50225 start.go:140] virtualization: kvm guest
	I0929 11:37:41.148691   50225 out.go:179] * [kubernetes-upgrade-197761] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:37:41.149979   50225 notify.go:220] Checking for updates...
	I0929 11:37:41.150018   50225 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:37:41.151437   50225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:37:41.152856   50225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:37:41.154282   50225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:41.155326   50225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:37:41.156624   50225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:37:41.158155   50225 config.go:182] Loaded profile config "kubernetes-upgrade-197761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:37:41.158582   50225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:41.158629   50225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:41.173303   50225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0929 11:37:41.173915   50225 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:41.174502   50225 main.go:141] libmachine: Using API Version  1
	I0929 11:37:41.174541   50225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:41.174875   50225 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:41.175216   50225 main.go:141] libmachine: (kubernetes-upgrade-197761) Calling .DriverName
	I0929 11:37:41.175544   50225 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:37:41.176028   50225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:41.176115   50225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:41.189312   50225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0929 11:37:41.189746   50225 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:41.190136   50225 main.go:141] libmachine: Using API Version  1
	I0929 11:37:41.190159   50225 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:41.190547   50225 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:41.190704   50225 main.go:141] libmachine: (kubernetes-upgrade-197761) Calling .DriverName
	I0929 11:37:41.224547   50225 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:37:41.227527   50225 start.go:304] selected driver: kvm2
	I0929 11:37:41.227546   50225 start.go:924] validating driver "kvm2" against &{Name:kubernetes-upgrade-197761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.0 ClusterName:kubernetes-upgrade-197761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.6 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:37:41.227685   50225 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:37:41.228628   50225 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:37:41.228713   50225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:37:41.242737   50225 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:37:41.242767   50225 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:37:41.257854   50225 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:37:41.258263   50225 cni.go:84] Creating CNI manager for ""
	I0929 11:37:41.258324   50225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:37:41.258399   50225 start.go:348] cluster config:
	{Name:kubernetes-upgrade-197761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-197761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.6 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:37:41.258507   50225 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:37:41.260300   50225 out.go:179] * Starting "kubernetes-upgrade-197761" primary control-plane node in "kubernetes-upgrade-197761" cluster
	I0929 11:37:39.503279   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:39.504032   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | no network interface addresses found for domain cert-expiration-415186 (source=lease)
	I0929 11:37:39.504048   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | trying to list again with source=arp
	I0929 11:37:39.504403   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | unable to find current IP address of domain cert-expiration-415186 in network mk-cert-expiration-415186 (interfaces detected: [])
	I0929 11:37:39.504423   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | I0929 11:37:39.504323   49965 retry.go:31] will retry after 4.388688088s: waiting for domain to come up
	W0929 11:37:40.112068   49203 pod_ready.go:104] pod "coredns-66bc5c9577-4jdvs" is not "Ready", error: <nil>
	I0929 11:37:42.108249   49203 pod_ready.go:94] pod "coredns-66bc5c9577-4jdvs" is "Ready"
	I0929 11:37:42.108274   49203 pod_ready.go:86] duration metric: took 6.505505339s for pod "coredns-66bc5c9577-4jdvs" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:42.111858   49203 pod_ready.go:83] waiting for pod "etcd-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 11:37:44.120335   49203 pod_ready.go:104] pod "etcd-pause-869600" is not "Ready", error: <nil>
	I0929 11:37:45.531618   49913 start.go:364] duration metric: took 28.11622315s to acquireMachinesLock for "force-systemd-flag-435555"
	I0929 11:37:45.531689   49913 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-435555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:force-systemd-flag-435555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:37:45.531804   49913 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:37:41.261523   50225 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:37:41.261560   50225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:37:41.261580   50225 cache.go:58] Caching tarball of preloaded images
	I0929 11:37:41.261648   50225 preload.go:172] Found /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:37:41.261663   50225 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:37:41.261759   50225 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/config.json ...
	I0929 11:37:41.261969   50225 start.go:360] acquireMachinesLock for kubernetes-upgrade-197761: {Name:mk5aa1ba007c5e25969fbfeac9bb0aa5318bfa89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
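The lock spec logged above carries a 500ms retry delay and a 13m timeout. As a rough illustration only, not minikube's actual locking code, an acquire-with-retry loop of that shape could look like this in Go:

    // Rough illustration of an acquire-with-retry loop driven by a delay and a
    // timeout, as suggested by the {Delay:500ms Timeout:13m0s} spec in the log.
    // Not minikube's implementation.
    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    var machines sync.Mutex // stand-in for the per-profile machines lock

    func acquire(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !try() {
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for lock")
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        start := time.Now()
        if err := acquire(machines.TryLock, 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer machines.Unlock()
        fmt.Println("acquired after", time.Since(start))
    }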
	I0929 11:37:43.894723   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:43.895468   49611 main.go:141] libmachine: (cert-expiration-415186) found domain IP: 192.168.39.205
	I0929 11:37:43.895494   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has current primary IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:43.895502   49611 main.go:141] libmachine: (cert-expiration-415186) reserving static IP address...
	I0929 11:37:43.895933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | unable to find host DHCP lease matching {name: "cert-expiration-415186", mac: "52:54:00:0d:1e:1e", ip: "192.168.39.205"} in network mk-cert-expiration-415186
	I0929 11:37:44.149085   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Getting to WaitForSSH function...
	I0929 11:37:44.149108   49611 main.go:141] libmachine: (cert-expiration-415186) reserved static IP address 192.168.39.205 for domain cert-expiration-415186
	I0929 11:37:44.149123   49611 main.go:141] libmachine: (cert-expiration-415186) waiting for SSH...
	I0929 11:37:44.152648   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.153224   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.153251   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.153497   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Using SSH client type: external
	I0929 11:37:44.153512   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | Using SSH private key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa (-rw-------)
	I0929 11:37:44.153544   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:37:44.153559   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | About to run SSH command:
	I0929 11:37:44.153569   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | exit 0
	I0929 11:37:44.289213   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | SSH cmd err, output: <nil>: 
	I0929 11:37:44.289533   49611 main.go:141] libmachine: (cert-expiration-415186) domain creation complete
	I0929 11:37:44.289942   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetConfigRaw
	I0929 11:37:44.290689   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:44.290925   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:44.291123   49611 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:37:44.291132   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetState
	I0929 11:37:44.292836   49611 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:37:44.292846   49611 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:37:44.292852   49611 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:37:44.292858   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.296539   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.296996   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.297028   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.297272   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.297509   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.297676   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.297830   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.298007   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.298224   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.298229   49611 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:37:44.398470   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:37:44.398484   49611 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:37:44.398492   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.401906   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.402285   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.402303   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.402535   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.402756   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.402902   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.403020   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.403151   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.403387   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.403392   49611 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:37:44.507774   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:37:44.507830   49611 main.go:141] libmachine: found compatible host: buildroot
	I0929 11:37:44.507835   49611 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:37:44.507842   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.508111   49611 buildroot.go:166] provisioning hostname "cert-expiration-415186"
	I0929 11:37:44.508134   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.508340   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.512340   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.512887   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.512933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.513085   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.513265   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.513434   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.513583   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.513745   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.514036   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.514046   49611 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-415186 && echo "cert-expiration-415186" | sudo tee /etc/hostname
	I0929 11:37:44.638787   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-415186
	
	I0929 11:37:44.638808   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.642513   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.643028   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.643058   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.643326   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.643535   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.643719   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.643881   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.644032   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:44.644219   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:44.644231   49611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-415186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-415186/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-415186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:37:44.762691   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:37:44.762710   49611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3816/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3816/.minikube}
	I0929 11:37:44.762746   49611 buildroot.go:174] setting up certificates
	I0929 11:37:44.762758   49611 provision.go:84] configureAuth start
	I0929 11:37:44.762768   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetMachineName
	I0929 11:37:44.763121   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:44.765895   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.766218   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.766253   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.766498   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.769682   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.770072   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.770088   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.770283   49611 provision.go:143] copyHostCerts
	I0929 11:37:44.770368   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem, removing ...
	I0929 11:37:44.770381   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem
	I0929 11:37:44.770460   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/cert.pem (1123 bytes)
	I0929 11:37:44.770563   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem, removing ...
	I0929 11:37:44.770568   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem
	I0929 11:37:44.770597   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/key.pem (1679 bytes)
	I0929 11:37:44.770645   49611 exec_runner.go:144] found /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem, removing ...
	I0929 11:37:44.770648   49611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem
	I0929 11:37:44.770670   49611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3816/.minikube/ca.pem (1082 bytes)
	I0929 11:37:44.770711   49611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-415186 san=[127.0.0.1 192.168.39.205 cert-expiration-415186 localhost minikube]
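The server certificate generated above carries org jenkins.cert-expiration-415186 and SANs [127.0.0.1 192.168.39.205 cert-expiration-415186 localhost minikube]. A minimal sketch of producing a certificate with that shape using Go's crypto/x509 follows; it is not minikube's implementation (minikube signs with its existing ca.pem/ca-key.pem, whereas this sketch generates a throwaway CA inline, and error handling is omitted for brevity):

    // Minimal sketch: issue a server certificate with the SANs shown in the log,
    // signed by a freshly generated CA. Not minikube's code; errors are ignored.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (minikube would load ca.pem / ca-key.pem instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the org and SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-415186"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"cert-expiration-415186", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }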
	I0929 11:37:44.827038   49611 provision.go:177] copyRemoteCerts
	I0929 11:37:44.827093   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:37:44.827113   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:44.830433   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.830723   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:44.830746   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:44.830950   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:44.831167   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:44.831305   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:44.831459   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:44.914884   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 11:37:44.947698   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0929 11:37:44.979144   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:37:45.011907   49611 provision.go:87] duration metric: took 249.136952ms to configureAuth
	I0929 11:37:45.011929   49611 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:37:45.012122   49611 config.go:182] Loaded profile config "cert-expiration-415186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:37:45.012202   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.015495   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.015933   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.015962   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.016166   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.016367   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.016523   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.016679   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.016797   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:45.017002   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:45.017016   49611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:37:45.267522   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 11:37:45.267540   49611 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:37:45.267549   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetURL
	I0929 11:37:45.269034   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | using libvirt version 8000000
	I0929 11:37:45.272230   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.272641   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.272664   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.272868   49611 main.go:141] libmachine: Docker is up and running!
	I0929 11:37:45.272875   49611 main.go:141] libmachine: Reticulating splines...
	I0929 11:37:45.272881   49611 client.go:171] duration metric: took 22.489300603s to LocalClient.Create
	I0929 11:37:45.272905   49611 start.go:167] duration metric: took 22.489364851s to libmachine.API.Create "cert-expiration-415186"
	I0929 11:37:45.272924   49611 start.go:293] postStartSetup for "cert-expiration-415186" (driver="kvm2")
	I0929 11:37:45.272945   49611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:37:45.272960   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.273202   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:37:45.273218   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.275954   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.276303   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.276321   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.276641   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.276827   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.277005   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.277145   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:45.361606   49611 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:37:45.367115   49611 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:37:45.367130   49611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/addons for local assets ...
	I0929 11:37:45.367195   49611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3816/.minikube/files for local assets ...
	I0929 11:37:45.367287   49611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem -> 76912.pem in /etc/ssl/certs
	I0929 11:37:45.367403   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 11:37:45.380447   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/ssl/certs/76912.pem --> /etc/ssl/certs/76912.pem (1708 bytes)
	I0929 11:37:45.414584   49611 start.go:296] duration metric: took 141.64863ms for postStartSetup
	I0929 11:37:45.414618   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetConfigRaw
	I0929 11:37:45.415346   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:45.418822   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.419253   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.419273   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.419590   49611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/config.json ...
	I0929 11:37:45.419797   49611 start.go:128] duration metric: took 22.658586465s to createHost
	I0929 11:37:45.419813   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.422236   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.422768   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.422789   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.423016   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.423190   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.423332   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.423491   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.423643   49611 main.go:141] libmachine: Using SSH client type: native
	I0929 11:37:45.423844   49611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0929 11:37:45.423848   49611 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:37:45.531487   49611 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759145865.496425117
	
	I0929 11:37:45.531498   49611 fix.go:216] guest clock: 1759145865.496425117
	I0929 11:37:45.531514   49611 fix.go:229] Guest: 2025-09-29 11:37:45.496425117 +0000 UTC Remote: 2025-09-29 11:37:45.41980278 +0000 UTC m=+43.576999587 (delta=76.622337ms)
	I0929 11:37:45.531531   49611 fix.go:200] guest clock delta is within tolerance: 76.622337ms
	I0929 11:37:45.531534   49611 start.go:83] releasing machines lock for "cert-expiration-415186", held for 22.770583697s
	I0929 11:37:45.531558   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.531849   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:45.535441   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.535964   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.535989   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.536240   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.536783   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.537002   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .DriverName
	I0929 11:37:45.537107   49611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:37:45.537145   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.537285   49611 ssh_runner.go:195] Run: cat /version.json
	I0929 11:37:45.537301   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHHostname
	I0929 11:37:45.540833   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.540991   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541297   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.541316   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541341   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:45.541377   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:45.541524   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.541691   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.541781   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHPort
	I0929 11:37:45.541880   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.541940   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHKeyPath
	I0929 11:37:45.542014   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:45.542094   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetSSHUsername
	I0929 11:37:45.542210   49611 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/cert-expiration-415186/id_rsa Username:docker}
	I0929 11:37:45.655252   49611 ssh_runner.go:195] Run: systemctl --version
	I0929 11:37:45.662656   49611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:37:45.833726   49611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:37:45.842977   49611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:37:45.843031   49611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:37:45.870533   49611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:37:45.870548   49611 start.go:495] detecting cgroup driver to use...
	I0929 11:37:45.870651   49611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:37:45.893748   49611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:37:45.912086   49611 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:37:45.912143   49611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:37:45.930123   49611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:37:45.949543   49611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:37:46.104338   49611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:37:46.323335   49611 docker.go:234] disabling docker service ...
	I0929 11:37:46.323411   49611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:37:46.341559   49611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:37:46.358427   49611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:37:46.525285   49611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:37:46.690963   49611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:37:46.708553   49611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:37:46.740011   49611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:37:46.740059   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.758488   49611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:37:46.758535   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.777448   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.793104   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.808537   49611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:37:46.824680   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.837871   49611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:37:46.859136   49611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
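Taken together, the sed edits above would leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in this log, not a capture of the file from the VM, and the section headers are assumed from a stock CRI-O drop-in:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]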
	I0929 11:37:46.873719   49611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:37:45.534409   49913 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0929 11:37:45.534606   49913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:37:45.534652   49913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:37:45.549808   49913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0929 11:37:45.550259   49913 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:37:45.550872   49913 main.go:141] libmachine: Using API Version  1
	I0929 11:37:45.550892   49913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:37:45.551374   49913 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:37:45.551643   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .GetMachineName
	I0929 11:37:45.551822   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .DriverName
	I0929 11:37:45.552023   49913 start.go:159] libmachine.API.Create for "force-systemd-flag-435555" (driver="kvm2")
	I0929 11:37:45.552065   49913 client.go:168] LocalClient.Create starting
	I0929 11:37:45.552104   49913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-3816/.minikube/certs/ca.pem
	I0929 11:37:45.552154   49913 main.go:141] libmachine: Decoding PEM data...
	I0929 11:37:45.552188   49913 main.go:141] libmachine: Parsing certificate...
	I0929 11:37:45.552272   49913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21657-3816/.minikube/certs/cert.pem
	I0929 11:37:45.552316   49913 main.go:141] libmachine: Decoding PEM data...
	I0929 11:37:45.552332   49913 main.go:141] libmachine: Parsing certificate...
	I0929 11:37:45.552377   49913 main.go:141] libmachine: Running pre-create checks...
	I0929 11:37:45.552390   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .PreCreateCheck
	I0929 11:37:45.552773   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .GetConfigRaw
	I0929 11:37:45.553401   49913 main.go:141] libmachine: Creating machine...
	I0929 11:37:45.553418   49913 main.go:141] libmachine: (force-systemd-flag-435555) Calling .Create
	I0929 11:37:45.553572   49913 main.go:141] libmachine: (force-systemd-flag-435555) creating domain...
	I0929 11:37:45.553592   49913 main.go:141] libmachine: (force-systemd-flag-435555) creating network...
	I0929 11:37:45.555099   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | found existing default network
	I0929 11:37:45.555244   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network connections='3'>
	I0929 11:37:45.555277   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>default</name>
	I0929 11:37:45.555296   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:37:45.555312   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <forward mode='nat'>
	I0929 11:37:45.555321   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <nat>
	I0929 11:37:45.555329   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <port start='1024' end='65535'/>
	I0929 11:37:45.555336   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </nat>
	I0929 11:37:45.555342   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </forward>
	I0929 11:37:45.555366   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:37:45.555388   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:37:45.555401   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:37:45.555408   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.555422   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:37:45.555429   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.555440   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.555464   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.555474   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.556537   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.556326   50310 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:90:ea} reservation:<nil>}
	I0929 11:37:45.557094   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.557007   50310 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c2:17:dc} reservation:<nil>}
	I0929 11:37:45.557884   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.557795   50310 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025ab60}
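The three network.go lines above show the probe order: 192.168.39.0/24 and 192.168.50.0/24 are skipped because their gateway addresses already belong to virbr1 and virbr2, and 192.168.61.0/24 is the first free block. A rough Go approximation of that check, offered as an illustration only and not minikube's network.go, is:

    // Rough illustration: walk candidate 192.168.X.0/24 blocks and take the first
    // one whose gateway (.1) is not already assigned to a local interface, the
    // same skip/use decisions the log above records. Not minikube's code.
    package main

    import (
        "fmt"
        "net"
    )

    func gatewayInUse(gw string) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return false
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
                return true // e.g. virbr1 already owns 192.168.39.1
            }
        }
        return false
    }

    func main() {
        for _, octet := range []int{39, 50, 61, 72} { // candidate third octets, illustrative order
            gw := fmt.Sprintf("192.168.%d.1", octet)
            if gatewayInUse(gw) {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            return
        }
    }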
	I0929 11:37:45.557911   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | defining private network:
	I0929 11:37:45.557923   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.557935   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network>
	I0929 11:37:45.557948   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>mk-force-systemd-flag-435555</name>
	I0929 11:37:45.557957   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <dns enable='no'/>
	I0929 11:37:45.557967   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:37:45.557977   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.557987   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:37:45.558002   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.558036   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.558060   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.558127   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.563958   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | creating private network mk-force-systemd-flag-435555 192.168.61.0/24...
	I0929 11:37:45.648794   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | private network mk-force-systemd-flag-435555 192.168.61.0/24 created
	I0929 11:37:45.649102   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <network>
	I0929 11:37:45.649119   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>mk-force-systemd-flag-435555</name>
	I0929 11:37:45.649128   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting up store path in /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 ...
	I0929 11:37:45.649142   49913 main.go:141] libmachine: (force-systemd-flag-435555) building disk image from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:37:45.649149   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>7524cf0e-8669-4a69-bfd1-5fc5c400d096</uuid>
	I0929 11:37:45.649157   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0929 11:37:45.649170   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <mac address='52:54:00:af:c0:73'/>
	I0929 11:37:45.649181   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <dns enable='no'/>
	I0929 11:37:45.649197   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0929 11:37:45.649220   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <dhcp>
	I0929 11:37:45.649231   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0929 11:37:45.649236   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </dhcp>
	I0929 11:37:45.649243   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </ip>
	I0929 11:37:45.649252   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </network>
	I0929 11:37:45.649264   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:45.649282   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.649137   50310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:45.649383   49913 main.go:141] libmachine: (force-systemd-flag-435555) Downloading /home/jenkins/minikube-integration/21657-3816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:37:45.871901   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.871728   50310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/id_rsa...
	I0929 11:37:45.999118   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.998966   50310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk...
	I0929 11:37:45.999154   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | Writing magic tar header
	I0929 11:37:45.999173   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | Writing SSH key tar header
	I0929 11:37:45.999185   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:45.999138   50310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 ...
	I0929 11:37:45.999334   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555
	I0929 11:37:45.999382   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555 (perms=drwx------)
	I0929 11:37:45.999400   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube/machines
	I0929 11:37:45.999421   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:37:45.999437   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21657-3816
	I0929 11:37:45.999476   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:37:45.999498   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:37:45.999510   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home/jenkins
	I0929 11:37:45.999529   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | checking permissions on dir: /home
	I0929 11:37:45.999541   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | skipping /home - not owner
	I0929 11:37:45.999556   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816/.minikube (perms=drwxr-xr-x)
	I0929 11:37:45.999567   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration/21657-3816 (perms=drwxrwxr-x)
	I0929 11:37:45.999598   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:37:45.999628   49913 main.go:141] libmachine: (force-systemd-flag-435555) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:37:45.999640   49913 main.go:141] libmachine: (force-systemd-flag-435555) defining domain...
	I0929 11:37:46.000820   49913 main.go:141] libmachine: (force-systemd-flag-435555) defining domain using XML: 
	I0929 11:37:46.000842   49913 main.go:141] libmachine: (force-systemd-flag-435555) <domain type='kvm'>
	I0929 11:37:46.000867   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <name>force-systemd-flag-435555</name>
	I0929 11:37:46.000886   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <memory unit='MiB'>3072</memory>
	I0929 11:37:46.000896   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <vcpu>2</vcpu>
	I0929 11:37:46.000906   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <features>
	I0929 11:37:46.000915   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <acpi/>
	I0929 11:37:46.000925   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <apic/>
	I0929 11:37:46.000943   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <pae/>
	I0929 11:37:46.000953   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </features>
	I0929 11:37:46.000998   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <cpu mode='host-passthrough'>
	I0929 11:37:46.001032   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </cpu>
	I0929 11:37:46.001045   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <os>
	I0929 11:37:46.001056   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <type>hvm</type>
	I0929 11:37:46.001065   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <boot dev='cdrom'/>
	I0929 11:37:46.001076   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <boot dev='hd'/>
	I0929 11:37:46.001085   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <bootmenu enable='no'/>
	I0929 11:37:46.001091   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </os>
	I0929 11:37:46.001100   49913 main.go:141] libmachine: (force-systemd-flag-435555)   <devices>
	I0929 11:37:46.001112   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <disk type='file' device='cdrom'>
	I0929 11:37:46.001129   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/boot2docker.iso'/>
	I0929 11:37:46.001139   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target dev='hdc' bus='scsi'/>
	I0929 11:37:46.001147   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <readonly/>
	I0929 11:37:46.001155   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </disk>
	I0929 11:37:46.001164   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <disk type='file' device='disk'>
	I0929 11:37:46.001177   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:37:46.001195   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk'/>
	I0929 11:37:46.001206   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target dev='hda' bus='virtio'/>
	I0929 11:37:46.001214   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </disk>
	I0929 11:37:46.001225   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <interface type='network'>
	I0929 11:37:46.001235   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source network='mk-force-systemd-flag-435555'/>
	I0929 11:37:46.001245   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <model type='virtio'/>
	I0929 11:37:46.001254   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </interface>
	I0929 11:37:46.001265   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <interface type='network'>
	I0929 11:37:46.001273   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <source network='default'/>
	I0929 11:37:46.001285   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <model type='virtio'/>
	I0929 11:37:46.001292   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </interface>
	I0929 11:37:46.001304   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <serial type='pty'>
	I0929 11:37:46.001314   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target port='0'/>
	I0929 11:37:46.001330   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </serial>
	I0929 11:37:46.001344   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <console type='pty'>
	I0929 11:37:46.001373   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <target type='serial' port='0'/>
	I0929 11:37:46.001392   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </console>
	I0929 11:37:46.001401   49913 main.go:141] libmachine: (force-systemd-flag-435555)     <rng model='virtio'>
	I0929 11:37:46.001413   49913 main.go:141] libmachine: (force-systemd-flag-435555)       <backend model='random'>/dev/random</backend>
	I0929 11:37:46.001421   49913 main.go:141] libmachine: (force-systemd-flag-435555)     </rng>
	I0929 11:37:46.001431   49913 main.go:141] libmachine: (force-systemd-flag-435555)   </devices>
	I0929 11:37:46.001453   49913 main.go:141] libmachine: (force-systemd-flag-435555) </domain>
	I0929 11:37:46.001468   49913 main.go:141] libmachine: (force-systemd-flag-435555) 
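The log above shows minikube's KVM driver composing a libvirt domain XML (memory, vCPUs, ISO cdrom, raw disk, two virtio NICs, serial console, RNG) and handing it to libvirt to define and boot. For reference only, a minimal sketch of that define-and-start flow using the libvirt Go bindings (libvirt.org/go/libvirt) is shown below; the connection URI and the domain.xml path are placeholders for this example, not values taken from this run.

    // Illustrative sketch, not minikube's driver code: define a persistent
    // domain from XML and boot it with the libvirt Go bindings.
    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML) // persistent definition
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create() // boots the defined domain
    }

    func main() {
    	xml, err := os.ReadFile("domain.xml") // placeholder path for the example
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := defineAndStart(string(xml)); err != nil {
    		log.Fatal(err)
    	}
    }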
	I0929 11:37:46.006699   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:cc:bc:06 in network default
	I0929 11:37:46.007512   49913 main.go:141] libmachine: (force-systemd-flag-435555) starting domain...
	I0929 11:37:46.007552   49913 main.go:141] libmachine: (force-systemd-flag-435555) ensuring networks are active...
	I0929 11:37:46.007566   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:46.008497   49913 main.go:141] libmachine: (force-systemd-flag-435555) Ensuring network default is active
	I0929 11:37:46.008880   49913 main.go:141] libmachine: (force-systemd-flag-435555) Ensuring network mk-force-systemd-flag-435555 is active
	I0929 11:37:46.009882   49913 main.go:141] libmachine: (force-systemd-flag-435555) getting domain XML...
	I0929 11:37:46.011308   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | starting domain XML:
	I0929 11:37:46.011329   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | <domain type='kvm'>
	I0929 11:37:46.011340   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <name>force-systemd-flag-435555</name>
	I0929 11:37:46.011368   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <uuid>69ceb9a2-2011-45f3-a825-e0cef8c12c06</uuid>
	I0929 11:37:46.011384   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 11:37:46.011393   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 11:37:46.011430   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:37:46.011487   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <os>
	I0929 11:37:46.011505   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:37:46.011516   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <boot dev='cdrom'/>
	I0929 11:37:46.011525   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <boot dev='hd'/>
	I0929 11:37:46.011536   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <bootmenu enable='no'/>
	I0929 11:37:46.011562   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </os>
	I0929 11:37:46.011587   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <features>
	I0929 11:37:46.011596   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <acpi/>
	I0929 11:37:46.011610   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <apic/>
	I0929 11:37:46.011644   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <pae/>
	I0929 11:37:46.011664   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </features>
	I0929 11:37:46.011675   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:37:46.011697   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <clock offset='utc'/>
	I0929 11:37:46.011707   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:37:46.011718   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:37:46.011776   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <on_crash>destroy</on_crash>
	I0929 11:37:46.011792   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   <devices>
	I0929 11:37:46.011804   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:37:46.011812   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <disk type='file' device='cdrom'>
	I0929 11:37:46.011823   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:37:46.011837   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/boot2docker.iso'/>
	I0929 11:37:46.011848   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:37:46.011856   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <readonly/>
	I0929 11:37:46.011867   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:37:46.011874   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </disk>
	I0929 11:37:46.011885   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <disk type='file' device='disk'>
	I0929 11:37:46.011897   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:37:46.011913   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source file='/home/jenkins/minikube-integration/21657-3816/.minikube/machines/force-systemd-flag-435555/force-systemd-flag-435555.rawdisk'/>
	I0929 11:37:46.011925   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:37:46.011937   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:37:46.011948   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </disk>
	I0929 11:37:46.011960   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:37:46.011986   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:37:46.012000   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </controller>
	I0929 11:37:46.012013   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:37:46.012027   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:37:46.012044   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:37:46.012057   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </controller>
	I0929 11:37:46.012069   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <interface type='network'>
	I0929 11:37:46.012082   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <mac address='52:54:00:ac:a5:58'/>
	I0929 11:37:46.012095   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source network='mk-force-systemd-flag-435555'/>
	I0929 11:37:46.012107   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <model type='virtio'/>
	I0929 11:37:46.012124   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:37:46.012150   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </interface>
	I0929 11:37:46.012162   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <interface type='network'>
	I0929 11:37:46.012176   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <mac address='52:54:00:cc:bc:06'/>
	I0929 11:37:46.012186   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <source network='default'/>
	I0929 11:37:46.012199   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <model type='virtio'/>
	I0929 11:37:46.012215   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:37:46.012228   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </interface>
	I0929 11:37:46.012240   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <serial type='pty'>
	I0929 11:37:46.012254   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target type='isa-serial' port='0'>
	I0929 11:37:46.012265   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |         <model name='isa-serial'/>
	I0929 11:37:46.012277   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       </target>
	I0929 11:37:46.012293   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </serial>
	I0929 11:37:46.012306   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <console type='pty'>
	I0929 11:37:46.012322   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <target type='serial' port='0'/>
	I0929 11:37:46.012336   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </console>
	I0929 11:37:46.012347   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:37:46.012396   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:37:46.012433   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <audio id='1' type='none'/>
	I0929 11:37:46.012455   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <memballoon model='virtio'>
	I0929 11:37:46.012472   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:37:46.012485   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </memballoon>
	I0929 11:37:46.012495   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     <rng model='virtio'>
	I0929 11:37:46.012507   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:37:46.012517   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:37:46.012528   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |     </rng>
	I0929 11:37:46.012535   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG |   </devices>
	I0929 11:37:46.012553   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | </domain>
	I0929 11:37:46.012567   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | 
	I0929 11:37:46.886241   49611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:37:46.886297   49611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:37:46.909620   49611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:37:46.922962   49611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:37:47.095931   49611 ssh_runner.go:195] Run: sudo systemctl restart crio
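The sysctl probe above fails because br_netfilter is not loaded yet (so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist), which minikube treats as non-fatal before loading the module, enabling IP forwarding, and restarting CRI-O. A rough local equivalent of that host preparation (run as root, using sysctl -w instead of echoing into /proc, and without minikube's SSH runner) could look like this:

    // Local sketch of the host prep logged above; minikube runs the same
    // commands over SSH via ssh_runner.go. Must be run as root.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	steps := [][]string{
    		{"modprobe", "br_netfilter"},
    		{"sysctl", "-w", "net.bridge.bridge-nf-call-iptables=1"},
    		{"sysctl", "-w", "net.ipv4.ip_forward=1"},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "crio"},
    	}
    	for _, s := range steps {
    		if err := run(s[0], s[1:]...); err != nil {
    			fmt.Fprintf(os.Stderr, "%v failed: %v\n", s, err)
    			os.Exit(1)
    		}
    	}
    }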
	I0929 11:37:47.231158   49611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:37:47.231226   49611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:37:47.237341   49611 start.go:563] Will wait 60s for crictl version
	I0929 11:37:47.237406   49611 ssh_runner.go:195] Run: which crictl
	I0929 11:37:47.242176   49611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:37:47.299122   49611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:37:47.299174   49611 ssh_runner.go:195] Run: crio --version
	I0929 11:37:47.339309   49611 ssh_runner.go:195] Run: crio --version
	I0929 11:37:47.377490   49611 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
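The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is essentially a bounded poll for the runtime socket before crictl is invoked. A small illustrative version of such a wait (the path and timeout are the ones shown in the log, the loop itself is an example):

    // Poll until a path exists or the deadline passes; illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("runtime socket is up")
    }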
	W0929 11:37:46.618701   49203 pod_ready.go:104] pod "etcd-pause-869600" is not "Ready", error: <nil>
	I0929 11:37:47.620897   49203 pod_ready.go:94] pod "etcd-pause-869600" is "Ready"
	I0929 11:37:47.620930   49203 pod_ready.go:86] duration metric: took 5.509048337s for pod "etcd-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:47.624091   49203 pod_ready.go:83] waiting for pod "kube-apiserver-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.131965   49203 pod_ready.go:94] pod "kube-apiserver-pause-869600" is "Ready"
	I0929 11:37:49.131998   49203 pod_ready.go:86] duration metric: took 1.507874128s for pod "kube-apiserver-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.135317   49203 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.141190   49203 pod_ready.go:94] pod "kube-controller-manager-pause-869600" is "Ready"
	I0929 11:37:49.141223   49203 pod_ready.go:86] duration metric: took 5.880041ms for pod "kube-controller-manager-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.143906   49203 pod_ready.go:83] waiting for pod "kube-proxy-7t7c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.150489   49203 pod_ready.go:94] pod "kube-proxy-7t7c5" is "Ready"
	I0929 11:37:49.150522   49203 pod_ready.go:86] duration metric: took 6.575659ms for pod "kube-proxy-7t7c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.216962   49203 pod_ready.go:83] waiting for pod "kube-scheduler-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.616515   49203 pod_ready.go:94] pod "kube-scheduler-pause-869600" is "Ready"
	I0929 11:37:49.616551   49203 pod_ready.go:86] duration metric: took 399.55305ms for pod "kube-scheduler-pause-869600" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:37:49.616569   49203 pod_ready.go:40] duration metric: took 14.018200469s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:37:49.663339   49203 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:37:49.667963   49203 out.go:179] * Done! kubectl is now configured to use "pause-869600" cluster and "default" namespace by default
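The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition is true (or the pod is gone). A hedged sketch of that kind of wait with client-go follows; the namespace and pod name are copied from the log, while the kubeconfig path, timeout, and poll interval are arbitrary choices for the example, not minikube's internals.

    // Illustrative readiness poll with client-go; not minikube's pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-869600", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			panic(ctx.Err())
    		case <-time.After(2 * time.Second):
    		}
    	}
    }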
	I0929 11:37:47.378854   49611 main.go:141] libmachine: (cert-expiration-415186) Calling .GetIP
	I0929 11:37:47.382790   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:47.383240   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:1e:1e", ip: ""} in network mk-cert-expiration-415186: {Iface:virbr1 ExpiryTime:2025-09-29 12:37:40 +0000 UTC Type:0 Mac:52:54:00:0d:1e:1e Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:cert-expiration-415186 Clientid:01:52:54:00:0d:1e:1e}
	I0929 11:37:47.383264   49611 main.go:141] libmachine: (cert-expiration-415186) DBG | domain cert-expiration-415186 has defined IP address 192.168.39.205 and MAC address 52:54:00:0d:1e:1e in network mk-cert-expiration-415186
	I0929 11:37:47.383513   49611 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:37:47.389074   49611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
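The /etc/hosts one-liner above strips any stale host.minikube.internal line and appends the current mapping. The same idempotent update written directly in Go, as an illustration only (and without the temp-file-then-cp step the bash command uses), could look like this:

    // Example hosts-entry update; the path, IP, and hostname come from the
    // log, the rewrite itself is illustrative and not atomic.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }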
	I0929 11:37:47.409263   49611 kubeadm.go:875] updating cluster {Name:cert-expiration-415186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:c
ert-expiration-415186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:37:47.409433   49611 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:37:47.409499   49611 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:37:47.450105   49611 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 11:37:47.450190   49611 ssh_runner.go:195] Run: which lz4
	I0929 11:37:47.455686   49611 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:37:47.461299   49611 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:37:47.461331   49611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 11:37:49.291832   49611 crio.go:462] duration metric: took 1.836233049s to copy over tarball
	I0929 11:37:49.291900   49611 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:37:51.293945   49611 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.002015961s)
	I0929 11:37:51.293973   49611 crio.go:469] duration metric: took 2.002119609s to extract the tarball
	I0929 11:37:51.293982   49611 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:37:51.350305   49611 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:37:51.403852   49611 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:37:51.403867   49611 cache_images.go:85] Images are preloaded, skipping loading
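The preload path above copies the ~409 MB preloaded-images tarball to the node and unpacks it into /var so the next crictl images call reports everything as cached. A minimal sketch of that extraction step, shelling out to the same tar invocation the log shows (tar and lz4 must be on PATH, and writing to /var needs root; the paths are examples):

    // Illustrative wrapper around the tar/lz4 extraction step from the log.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball, dest string) error {
    	cmd := exec.Command("tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", dest, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		log.Fatal(err)
    	}
    }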
	I0929 11:37:51.403875   49611 kubeadm.go:926] updating node { 192.168.39.205 8443 v1.34.0 crio true true} ...
	I0929 11:37:51.403988   49611 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-415186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-415186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:37:51.404068   49611 ssh_runner.go:195] Run: crio config
	I0929 11:37:51.465107   49611 cni.go:84] Creating CNI manager for ""
	I0929 11:37:51.465120   49611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:37:51.465133   49611 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:37:51.465154   49611 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-415186 NodeName:cert-expiration-415186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:37:51.465278   49611 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-415186"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 11:37:51.465347   49611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:37:51.480144   49611 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:37:51.480205   49611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:37:51.493273   49611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0929 11:37:51.519211   49611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:37:51.542339   49611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
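The kubeadm config generated above and copied to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a purely illustrative sanity check, such a file can be decoded document by document and each apiVersion/kind printed; this sketch uses gopkg.in/yaml.v3 and an example file name, and is not how minikube validates the config.

    // Example: enumerate the documents in a multi-document kubeadm config.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // example path
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }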
	I0929 11:37:51.570903   49611 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0929 11:37:51.575972   49611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:37:51.595604   49611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:37:51.768865   49611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:37:51.819611   49611 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186 for IP: 192.168.39.205
	I0929 11:37:51.819625   49611 certs.go:194] generating shared ca certs ...
	I0929 11:37:51.819643   49611 certs.go:226] acquiring lock for ca certs: {Name:mk991a8b4541d4c7b4b7bab2e7dfb0450ec66a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:37:51.819800   49611 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.key
	I0929 11:37:51.819846   49611 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3816/.minikube/proxy-client-ca.key
	I0929 11:37:51.819870   49611 certs.go:256] generating profile certs ...
	I0929 11:37:51.819942   49611 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.key
	I0929 11:37:51.819962   49611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.crt with IP's: []
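The certs.go steps above reuse the already-generated shared CAs and then mint a profile client certificate for "minikube-user". A compact, purely illustrative sketch of that pattern with the standard library (generate a CA, then a client certificate signed by it) follows; key sizes, subject names, and validity are example values, not the ones minikube uses.

    // Example CA + client certificate with crypto/x509; illustrative only.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"log"
    	"math/big"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		log.Fatal(err)
    	}
    }

    func main() {
    	// Example CA, standing in for an existing cluster CA.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "exampleCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Client ("profile") certificate signed by that CA.
    	cliKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	cliTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "example-user"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
    	check(err)
    	fmt.Printf("client cert: %d bytes, issued by %s\n", len(cliDER), caCert.Subject.CommonName)
    }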
	I0929 11:37:47.549648   49913 main.go:141] libmachine: (force-systemd-flag-435555) waiting for domain to start...
	I0929 11:37:47.551031   49913 main.go:141] libmachine: (force-systemd-flag-435555) domain is now running
	I0929 11:37:47.551060   49913 main.go:141] libmachine: (force-systemd-flag-435555) waiting for IP...
	I0929 11:37:47.551924   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:47.552521   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:47.552542   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:47.553011   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:47.553088   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:47.553024   50310 retry.go:31] will retry after 291.63999ms: waiting for domain to come up
	I0929 11:37:47.846961   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:47.847854   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:47.847902   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:47.848140   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:47.848179   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:47.848119   50310 retry.go:31] will retry after 326.544211ms: waiting for domain to come up
	I0929 11:37:48.176887   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:48.177524   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:48.177554   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:48.177937   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:48.178011   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:48.177962   50310 retry.go:31] will retry after 371.041108ms: waiting for domain to come up
	I0929 11:37:48.550625   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:48.551425   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:48.551456   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:48.551816   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:48.551848   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:48.551773   50310 retry.go:31] will retry after 607.211162ms: waiting for domain to come up
	I0929 11:37:49.160993   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:49.161821   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:49.161849   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:49.162200   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:49.162249   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:49.162188   50310 retry.go:31] will retry after 507.294203ms: waiting for domain to come up
	I0929 11:37:49.671899   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:49.672669   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:49.672698   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:49.673061   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:49.673087   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:49.673031   50310 retry.go:31] will retry after 738.164202ms: waiting for domain to come up
	I0929 11:37:50.413389   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:50.414210   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:50.414245   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:50.414456   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:50.414486   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:50.414334   50310 retry.go:31] will retry after 939.853718ms: waiting for domain to come up
	I0929 11:37:51.356075   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | domain force-systemd-flag-435555 has defined MAC address 52:54:00:ac:a5:58 in network mk-force-systemd-flag-435555
	I0929 11:37:51.356704   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | no network interface addresses found for domain force-systemd-flag-435555 (source=lease)
	I0929 11:37:51.356730   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | trying to list again with source=arp
	I0929 11:37:51.357095   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | unable to find current IP address of domain force-systemd-flag-435555 in network mk-force-systemd-flag-435555 (interfaces detected: [])
	I0929 11:37:51.357144   49913 main.go:141] libmachine: (force-systemd-flag-435555) DBG | I0929 11:37:51.357060   50310 retry.go:31] will retry after 1.151602992s: waiting for domain to come up
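The retry loop above keeps asking libvirt for the new domain's addresses (DHCP leases first, then an ARP fallback) until the MAC shows up with an IP. A hedged sketch of the lease-polling half using the libvirt Go bindings follows; the network name and MAC address are taken from this log, while the timeout, interval, and error handling are illustrative.

    // Illustrative lease poll with libvirt.org/go/libvirt, not the driver's code.
    package main

    import (
    	"fmt"
    	"time"

    	libvirt "libvirt.org/go/libvirt"
    )

    func waitForLease(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
    	nw, err := conn.LookupNetworkByName(network)
    	if err != nil {
    		return "", err
    	}
    	defer nw.Free()

    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		leases, err := nw.GetDHCPLeases()
    		if err == nil {
    			for _, l := range leases {
    				if l.Mac == mac {
    					return l.IPaddr, nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
    }

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ip, err := waitForLease(conn, "mk-force-systemd-flag-435555", "52:54:00:ac:a5:58", 2*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("domain is reachable at", ip)
    }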
	
	
	==> CRI-O <==
	Sep 29 11:37:52 pause-869600 crio[3357]: time="2025-09-29 11:37:52.993452426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37a8b649-171b-4c0b-84cb-a5be91bcdda0 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:52 pause-869600 crio[3357]: time="2025-09-29 11:37:52.993548063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37a8b649-171b-4c0b-84cb-a5be91bcdda0 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:52 pause-869600 crio[3357]: time="2025-09-29 11:37:52.997367824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1eb74088-711e-450f-aba4-5691f563bb0e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:52 pause-869600 crio[3357]: time="2025-09-29 11:37:52.997823907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145872997797721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1eb74088-711e-450f-aba4-5691f563bb0e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.000636144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2c27273-e13a-4de0-87b4-adc7bb23f835 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.000760632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2c27273-e13a-4de0-87b4-adc7bb23f835 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.001981343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2c27273-e13a-4de0-87b4-adc7bb23f835 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.055174623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8414d4ea-a6aa-487d-a170-591831e935fc name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.055715750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8414d4ea-a6aa-487d-a170-591831e935fc name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.057627008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b68a1d4-a7e0-464d-8c0b-e2e9b2dbe539 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.058169597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145873058144884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b68a1d4-a7e0-464d-8c0b-e2e9b2dbe539 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.058670057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ee9991b-d8db-4399-851a-327b1aea01eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.058736955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ee9991b-d8db-4399-851a-327b1aea01eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.059114854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ee9991b-d8db-4399-851a-327b1aea01eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.114942423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47526a91-d387-4783-ae77-fdb05d477256 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.115016164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47526a91-d387-4783-ae77-fdb05d477256 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.116537383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b237f5cd-56a6-4277-9e87-8f8aa03387db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.116996371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759145873116971903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b237f5cd-56a6-4277-9e87-8f8aa03387db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.117591451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10efdcec-c45b-4ae3-8b2b-4d0521da2d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.117854287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10efdcec-c45b-4ae3-8b2b-4d0521da2d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.118282361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759145853459648772,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c,PodSandboxId:8951e553ffe6d91eaa6e81ef5b3954d16e09b8ded5e723217f74d92bd4045a94,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759145853463868272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759145848840432073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759145848846271910,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759145848829853938,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb85bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4616
9d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759145848816358482,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210,PodSandboxId:90332d6e995f19c2a1626e9e82ca825eb77789770896bb8
5bcfe822981d3c90a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759145839757282226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f70a65abe9cf0d8fc12a4578e54cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfd71b5df5fc63eed6ab9
ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6,PodSandboxId:d9bda1da2de7d621e6d81e226372c1c4019cf462049b379de030f6f76aaf281f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759145839766363598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2efe53242661b5790267dd184a745f24,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db,PodSandboxId:36ef9575d84a1f17efcc5478e9214c8d43b62aa3a9f8a049eda495c1961b65a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759145839616676685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7t7c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e54a7fe-cc9b-4796-bd74-320f42680285,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d,PodSandboxId:0dd61cbeaea56f1ddb0985431adceb50f457497805e16ee9b7e6a236598111f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759145839477261757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638a8c9a14963fecde6cef6f103917da,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c,PodSandboxId:4280293148dfa2205512dbcca617b4659dbc005e04a4912ecd7cb483adf1041f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759145839439294857,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-869600,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badb9000f601fb73a2daae9577e989ca,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a,PodSandboxId:4eb3b5bad382d68dc1c4a4b327e6bf2b7fcb3901689eb39a701c77688213b467,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759145827344122429,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4jdvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ab4a0-2883-4324-b5a3-49ef080d5362,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10efdcec-c45b-4ae3-8b2b-4d0521da2d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.295728305Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=fbbc12d3-508d-4691-92e7-1a68ec6f891f name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.295833746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbbc12d3-508d-4691-92e7-1a68ec6f891f name=/runtime.v1.RuntimeService/Version
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.316196878Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=7c6dc16e-05c6-4fe5-8441-21a73d9e68f6 name=/runtime.v1.RuntimeService/Status
	Sep 29 11:37:53 pause-869600 crio[3357]: time="2025-09-29 11:37:53.316303673Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7c6dc16e-05c6-4fe5-8441-21a73d9e68f6 name=/runtime.v1.RuntimeService/Status
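	The verbose RuntimeService/ListContainers traffic above is CRI-O's debug-level logging as scraped by the test harness. A minimal sketch for viewing the same stream directly on the node, assuming the pause-869600 profile is still running (the profile name comes from this log; the exact journalctl flags are illustrative):
	    # Show the last entries of the CRI-O service journal on the minikube node
	    minikube ssh -p pause-869600 -- sudo journalctl -u crio --no-pager -n 100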
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78a357841af5c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago      Running             coredns                   2                   8951e553ffe6d       coredns-66bc5c9577-4jdvs
	72273cf18a35d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   20 seconds ago      Running             kube-proxy                3                   36ef9575d84a1       kube-proxy-7t7c5
	423b47a312721       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   25 seconds ago      Running             etcd                      3                   4280293148dfa       etcd-pause-869600
	e0d38dfdf3c2a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   25 seconds ago      Running             kube-controller-manager   3                   d9bda1da2de7d       kube-controller-manager-pause-869600
	e91e28a3e9274       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   25 seconds ago      Running             kube-apiserver            3                   0dd61cbeaea56       kube-apiserver-pause-869600
	7a673a11069a0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   25 seconds ago      Running             kube-scheduler            3                   90332d6e995f1       kube-scheduler-pause-869600
	dfd71b5df5fc6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   34 seconds ago      Exited              kube-controller-manager   2                   d9bda1da2de7d       kube-controller-manager-pause-869600
	37c3be3aca1bf       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   34 seconds ago      Exited              kube-scheduler            2                   90332d6e995f1       kube-scheduler-pause-869600
	d57516244a568       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   34 seconds ago      Exited              kube-proxy                2                   36ef9575d84a1       kube-proxy-7t7c5
	97dd7f4f8e1b5       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   34 seconds ago      Exited              kube-apiserver            2                   0dd61cbeaea56       kube-apiserver-pause-869600
	631a9b239bbb6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Exited              etcd                      2                   4280293148dfa       etcd-pause-869600
	18662ffc7957d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   46 seconds ago      Exited              coredns                   1                   4eb3b5bad382d       coredns-66bc5c9577-4jdvs
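	The table above mirrors what crictl reports against the CRI socket. A minimal sketch for reproducing it and pulling logs for one of the exited containers, assuming the cluster is still up (the container ID prefix is taken from the table; everything else is illustrative):
	    # List all containers, including exited attempts
	    minikube ssh -p pause-869600 -- sudo crictl ps -a
	    # Logs of the earlier, exited coredns container from the table
	    minikube ssh -p pause-869600 -- sudo crictl logs 18662ffc7957d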
	
	
	==> coredns [18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57736 - 64303 "HINFO IN 1784267252011926155.4111428683321706473. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063623689s
	
	
	==> coredns [78a357841af5c9b31b23c99e9125ca7804819bc640b2d98d750a6ce9a17d9f0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40788 - 23464 "HINFO IN 3474408110492409780.3150661101563447014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079109455s
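	Both coredns excerpts above were captured through the container runtime. A minimal sketch for fetching the same output through the API server instead, assuming minikube named the kubeconfig context after the profile (pod name from this log; --previous shows the earlier, exited instance if it is still retained):
	    # Current coredns instance
	    kubectl --context pause-869600 -n kube-system logs coredns-66bc5c9577-4jdvs
	    # Previously exited instance of the same pod
	    kubectl --context pause-869600 -n kube-system logs coredns-66bc5c9577-4jdvs --previous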
	
	
	==> describe nodes <==
	Name:               pause-869600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-869600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=pause-869600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_35_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:35:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-869600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:37:32 +0000   Mon, 29 Sep 2025 11:35:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.21
	  Hostname:    pause-869600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb4e27945a8d4c53957e1a8a4b7047e8
	  System UUID:                eb4e2794-5a8d-4c53-957e-1a8a4b7047e8
	  Boot ID:                    0d164a3b-aa8a-41a7-8f1b-a9b8bdbb05e2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4jdvs                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     116s
	  kube-system                 etcd-pause-869600                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m3s
	  kube-system                 kube-apiserver-pause-869600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-869600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-7t7c5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-pause-869600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 114s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m8s)  kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m8s)  kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m8s)  kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m1s                 kubelet          Node pause-869600 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                 node-controller  Node pause-869600 event: Registered Node pause-869600 in Controller
	  Normal  Starting                 26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)    kubelet          Node pause-869600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)    kubelet          Node pause-869600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)    kubelet          Node pause-869600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                  node-controller  Node pause-869600 event: Registered Node pause-869600 in Controller
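	The node description above is a point-in-time snapshot taken by the harness. A minimal sketch for regenerating it and listing recent cluster events in order, again assuming the kubeconfig context matches the profile name (node name from this log; flags illustrative):
	    # Re-describe the control-plane node
	    kubectl --context pause-869600 describe node pause-869600
	    # Recent events across all namespaces, oldest first
	    kubectl --context pause-869600 get events -A --sort-by=.metadata.creationTimestamp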
	
	
	==> dmesg <==
	[Sep29 11:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003491] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187573] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.094279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110689] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.100828] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.151195] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.121465] kauditd_printk_skb: 18 callbacks suppressed
	[Sep29 11:36] kauditd_printk_skb: 219 callbacks suppressed
	[ +27.532636] kauditd_printk_skb: 38 callbacks suppressed
	[Sep29 11:37] kauditd_printk_skb: 275 callbacks suppressed
	[  +3.963631] kauditd_printk_skb: 245 callbacks suppressed
	[  +0.675512] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.220239] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.050755] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [423b47a3127215ffbc1582c306e8a879b3aabb224d081d57bc6b2197ae485657] <==
	{"level":"warn","ts":"2025-09-29T11:37:31.995120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.004303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.013582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.023197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.031089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.041336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.051154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.060657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.068064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:32.145641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:37:53.517826Z","caller":"traceutil/trace.go:172","msg":"trace[1730105694] linearizableReadLoop","detail":"{readStateIndex:596; appliedIndex:596; }","duration":"205.680154ms","start":"2025-09-29T11:37:53.312121Z","end":"2025-09-29T11:37:53.517801Z","steps":["trace[1730105694] 'read index received'  (duration: 205.67561ms)","trace[1730105694] 'applied index is now lower than readState.Index'  (duration: 3.895µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:37:53.517978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.834698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:37:53.517997Z","caller":"traceutil/trace.go:172","msg":"trace[1124565898] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:541; }","duration":"205.877088ms","start":"2025-09-29T11:37:53.312115Z","end":"2025-09-29T11:37:53.517992Z","steps":["trace[1124565898] 'agreement among raft nodes before linearized reading'  (duration: 205.816341ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:37:53.518072Z","caller":"traceutil/trace.go:172","msg":"trace[1726513694] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"225.59835ms","start":"2025-09-29T11:37:53.292459Z","end":"2025-09-29T11:37:53.518057Z","steps":["trace[1726513694] 'process raft request'  (duration: 225.395811ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:37:54.153603Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"369.286523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16477432528025782679 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" mod_revision:533 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T11:37:54.153709Z","caller":"traceutil/trace.go:172","msg":"trace[1045142310] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"167.423791ms","start":"2025-09-29T11:37:53.986273Z","end":"2025-09-29T11:37:54.153697Z","steps":["trace[1045142310] 'read index received'  (duration: 36.53µs)","trace[1045142310] 'applied index is now lower than readState.Index'  (duration: 167.386319ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:37:54.153737Z","caller":"traceutil/trace.go:172","msg":"trace[179020883] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"535.71155ms","start":"2025-09-29T11:37:53.618008Z","end":"2025-09-29T11:37:54.153719Z","steps":["trace[179020883] 'process raft request'  (duration: 165.660794ms)","trace[179020883] 'compare'  (duration: 369.130418ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:37:54.153759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.484354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:37:54.153778Z","caller":"traceutil/trace.go:172","msg":"trace[1286291294] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"167.50471ms","start":"2025-09-29T11:37:53.986268Z","end":"2025-09-29T11:37:54.153772Z","steps":["trace[1286291294] 'agreement among raft nodes before linearized reading'  (duration: 167.463416ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:37:54.153825Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:37:53.617975Z","time spent":"535.803206ms","remote":"127.0.0.1:38008","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" mod_revision:533 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ufyp5c37of7xaj2g24dayj65ja\" > >"}
	{"level":"warn","ts":"2025-09-29T11:37:54.555138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.425977ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16477432528025782684 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:64ab99954371899b>","response":"size:39"}
	{"level":"info","ts":"2025-09-29T11:37:54.555235Z","caller":"traceutil/trace.go:172","msg":"trace[1592940673] linearizableReadLoop","detail":"{readStateIndex:599; appliedIndex:598; }","duration":"243.848784ms","start":"2025-09-29T11:37:54.311372Z","end":"2025-09-29T11:37:54.555221Z","steps":["trace[1592940673] 'read index received'  (duration: 38.228µs)","trace[1592940673] 'applied index is now lower than readState.Index'  (duration: 243.809713ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:37:54.555297Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.937373ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:37:54.555318Z","caller":"traceutil/trace.go:172","msg":"trace[816202699] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:543; }","duration":"243.968184ms","start":"2025-09-29T11:37:54.311342Z","end":"2025-09-29T11:37:54.555311Z","steps":["trace[816202699] 'agreement among raft nodes before linearized reading'  (duration: 243.9187ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:37:54.555377Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:37:54.225392Z","time spent":"329.981901ms","remote":"127.0.0.1:54422","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [631a9b239bbb6fd197ae60b88d99e744110391cb1fec84f6dc355431195eed2c] <==
	{"level":"warn","ts":"2025-09-29T11:37:22.029215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.039113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.057136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.074773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.082082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.091404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:37:22.136307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:37:24.310830Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:37:24.311055Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-869600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"]}
	{"level":"error","ts":"2025-09-29T11:37:24.311339Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:37:24.313298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313921Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.21:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313936Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.21:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:37:24.313944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.21:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T11:37:24.313507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.313527Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b85f157810fe4ab","current-leader-member-id":"6b85f157810fe4ab"}
	{"level":"warn","ts":"2025-09-29T11:37:24.313749Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:37:24.314075Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:37:24.314085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.314097Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T11:37:24.314191Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:37:24.318141Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"error","ts":"2025-09-29T11:37:24.318199Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.21:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:37:24.318219Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2025-09-29T11:37:24.318225Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-869600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"]}
	
	
	==> kernel <==
	 11:37:55 up 2 min,  0 users,  load average: 1.48, 0.68, 0.27
	Linux pause-869600 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [97dd7f4f8e1b5ff7f993ece1ee3a6d02b8e0895abffdfbaf28e77b46e69be30d] <==
	I0929 11:37:23.156567       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0929 11:37:23.156585       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0929 11:37:23.156595       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0929 11:37:23.156693       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I0929 11:37:23.157095       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0929 11:37:23.157619       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0929 11:37:23.157828       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0929 11:37:23.159939       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0929 11:37:23.160158       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0929 11:37:23.160201       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I0929 11:37:23.160224       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0929 11:37:23.160284       1 secure_serving.go:259] Stopped listening on [::]:8443
	I0929 11:37:23.160322       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:37:23.160423       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0929 11:37:23.160689       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I0929 11:37:23.160868       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0929 11:37:23.161113       1 controller.go:157] Shutting down quota evaluator
	I0929 11:37:23.161141       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161370       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161614       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161623       1 controller.go:176] quota evaluator worker shutdown
	I0929 11:37:23.161661       1 controller.go:176] quota evaluator worker shutdown
	W0929 11:37:23.703793       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0929 11:37:23.705412       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	I0929 11:37:24.200678       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	
	
	==> kube-apiserver [e91e28a3e927477988f57dfd7561d3d154531e57d8587c2f21b9a49d04b74329] <==
	{"level":"warn","ts":"2025-09-29T11:37:34.168084Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.171950Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.194109Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.200101Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.218644Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.226797Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.242285Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.251246Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.267014Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.274316Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.290754Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.301663Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.314256Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.327511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.341280Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.352442Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.368188Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-29T11:37:34.395683Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00157cf00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	I0929 11:37:34.949530       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:37:35.039127       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:37:35.093132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:37:35.107649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:37:36.579383       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:37:36.628699       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:37:36.678142       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dfd71b5df5fc63eed6ab9ed4312f5ac89cb9a39ed215fb2bbe3206f0bd304aa6] <==
	
	
	==> kube-controller-manager [e0d38dfdf3c2a75cee84ad8a30fae04ce48b8aae7ad0b66f632b7e117f79dc7c] <==
	I0929 11:37:36.328941       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:37:36.334535       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:37:36.334600       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 11:37:36.334615       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 11:37:36.342140       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:37:36.349771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:37:36.351000       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 11:37:36.352366       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:37:36.356975       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:37:36.361363       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:37:36.364995       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:37:36.367501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:37:36.371319       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:37:36.371383       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:37:36.373162       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:37:36.373190       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:37:36.373370       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:37:36.373203       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:37:36.373504       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-869600"
	I0929 11:37:36.373603       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:37:36.373848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:37:36.375766       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:37:36.376383       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:37:36.376475       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:37:36.378162       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [72273cf18a35d4c987fc4338d0cc77370d3c090128a23932571ae87804282ff2] <==
	I0929 11:37:33.733487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:37:33.834222       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:37:33.834508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.21"]
	E0929 11:37:33.834672       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:37:33.898407       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:37:33.898518       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:37:33.898554       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:37:33.917027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:37:33.917552       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:37:33.917597       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:37:33.925225       1 config.go:200] "Starting service config controller"
	I0929 11:37:33.925949       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:37:33.926017       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:37:33.926054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:37:33.926077       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:37:33.926091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:37:33.926728       1 config.go:309] "Starting node config controller"
	I0929 11:37:33.928203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:37:33.928244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:37:34.026444       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:37:34.026550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:37:34.026567       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
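The "No iptables support for family IPv6" message in the block above is informational: the Buildroot guest kernel exposes no ip6tables nat table, so kube-proxy falls back to single-stack IPv4. A hedged way to verify this from the host, assuming the usual minikube ssh workflow (not something the harness itself ran):

  # Expect the same "Table does not exist" error seen in the kube-proxy log.
  out/minikube-linux-amd64 -p pause-869600 ssh "sudo ip6tables -t nat -L -n"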
	
	
	==> kube-proxy [d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db] <==
	I0929 11:37:20.541660       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:37:21.236588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:37:22.837998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:37:22.838053       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.21"]
	E0929 11:37:22.838145       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	
	
	==> kube-scheduler [37c3be3aca1bf0fcfc2a3982fb21166d69291bc135dbcd0f54a12f1d73936210] <==
	I0929 11:37:21.559384       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:37:22.730437       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:37:22.731960       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:37:22.732083       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:37:22.732107       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:37:22.864044       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:37:22.868053       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0929 11:37:22.868130       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0929 11:37:22.882772       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0929 11:37:22.882941       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I0929 11:37:22.883034       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883069       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 11:37:22.883079       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883086       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:22.883104       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:37:22.883131       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:37:22.883207       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:37:22.883241       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:37:22.883246       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:37:22.883274       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7a673a11069a02d9f4fda763aaf3e35c3f426ec7c5c8478124ae96f8fdbe8f03] <==
	I0929 11:37:30.221573       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:37:32.797029       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:37:32.797070       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 11:37:32.797079       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:37:32.797085       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:37:32.835197       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:37:32.835265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:37:32.837365       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:32.837428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:37:32.837658       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:37:32.837947       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:37:32.937604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:37:31 pause-869600 kubelet[4503]: E0929 11:37:31.419109    4503 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-869600\" not found" node="pause-869600"
	Sep 29 11:37:31 pause-869600 kubelet[4503]: E0929 11:37:31.420103    4503 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-869600\" not found" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.751128    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.885066    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-869600\" already exists" pod="kube-system/kube-apiserver-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.885093    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.896592    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-869600\" already exists" pod="kube-system/kube-controller-manager-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.896745    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.907025    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-869600\" already exists" pod="kube-system/kube-scheduler-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.907642    4503 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: E0929 11:37:32.919731    4503 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-869600\" already exists" pod="kube-system/etcd-pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.956487    4503 kubelet_node_status.go:124] "Node was previously registered" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.957227    4503 kubelet_node_status.go:78] "Successfully registered node" node="pause-869600"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.957280    4503 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 11:37:32 pause-869600 kubelet[4503]: I0929 11:37:32.960540    4503 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.124225    4503 apiserver.go:52] "Watching apiserver"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.151592    4503 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.177441    4503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e54a7fe-cc9b-4796-bd74-320f42680285-xtables-lock\") pod \"kube-proxy-7t7c5\" (UID: \"0e54a7fe-cc9b-4796-bd74-320f42680285\") " pod="kube-system/kube-proxy-7t7c5"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.177497    4503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e54a7fe-cc9b-4796-bd74-320f42680285-lib-modules\") pod \"kube-proxy-7t7c5\" (UID: \"0e54a7fe-cc9b-4796-bd74-320f42680285\") " pod="kube-system/kube-proxy-7t7c5"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.431527    4503 scope.go:117] "RemoveContainer" containerID="18662ffc7957d82e832fd60a0dd22039d2188d9064d4f00d81fcc63c47edc72a"
	Sep 29 11:37:33 pause-869600 kubelet[4503]: I0929 11:37:33.433183    4503 scope.go:117] "RemoveContainer" containerID="d57516244a568a42d32547537cecc48aaaf8039cc3d9a1c635898c7ddc4f88db"
	Sep 29 11:37:38 pause-869600 kubelet[4503]: E0929 11:37:38.295542    4503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145858295148878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:38 pause-869600 kubelet[4503]: E0929 11:37:38.295572    4503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145858295148878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:41 pause-869600 kubelet[4503]: I0929 11:37:41.652343    4503 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 29 11:37:48 pause-869600 kubelet[4503]: E0929 11:37:48.298835    4503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145868298184035  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 11:37:48 pause-869600 kubelet[4503]: E0929 11:37:48.298956    4503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145868298184035  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
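The eviction-manager errors above are noise rather than a sign of disk pressure: CRI-O did return image-filesystem stats (mountpoint /var/lib/containers/storage/overlay-images), but kubelet still reports them as missing and repeats the message on every eviction-manager sync. As a hedged sketch, the same stats can be inspected directly over CRI, assuming crictl is available in the guest as in standard minikube images:

  out/minikube-linux-amd64 -p pause-869600 ssh "sudo crictl imagefsinfo"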
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-869600 -n pause-869600
helpers_test.go:269: (dbg) Run:  kubectl --context pause-869600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.96s)
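For local triage, a minimal sketch of reproducing the second-start scenario outside the harness; the profile name, driver, and runtime flags mirror this run, while memory sizing and verbosity are left at defaults here and may differ from what the harness passes:

  # First start creates the cluster; the second start must come up without reconfiguring it.
  out/minikube-linux-amd64 start -p pause-869600 --driver=kvm2 --container-runtime=crio --alsologtostderr -v=1
  out/minikube-linux-amd64 start -p pause-869600 --alsologtostderr -v=1
  # Post-mortem checks mirroring the helpers above.
  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-869600 -n pause-869600
  kubectl --context pause-869600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running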

                                                
                                    

Test pass (271/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.82
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 4.26
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.13
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.63
22 TestOffline 128.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.76
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.5
35 TestAddons/parallel/Registry 71.77
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 5.79
42 TestAddons/parallel/Headlamp 80.06
43 TestAddons/parallel/CloudSpanner 6.59
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 11.79
48 TestAddons/StoppedEnableDisable 82.78
49 TestCertOptions 63.57
50 TestCertExpiration 285.54
52 TestForceSystemdFlag 73.05
53 TestForceSystemdEnv 44.38
55 TestKVMDriverInstallOrUpdate 1.03
59 TestErrorSpam/setup 40.55
60 TestErrorSpam/start 0.32
61 TestErrorSpam/status 0.75
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.82
64 TestErrorSpam/stop 5.39
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.9
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.85
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
76 TestFunctional/serial/CacheCmd/cache/add_local 1.5
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 33.93
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.52
87 TestFunctional/serial/LogsFileCmd 1.52
88 TestFunctional/serial/InvalidService 4.68
90 TestFunctional/parallel/ConfigCmd 0.33
92 TestFunctional/parallel/DryRun 0.25
93 TestFunctional/parallel/InternationalLanguage 0.13
94 TestFunctional/parallel/StatusCmd 0.8
99 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/SSHCmd 0.47
103 TestFunctional/parallel/CpCmd 1.37
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.4
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
114 TestFunctional/parallel/License 0.42
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.46
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
124 TestFunctional/parallel/ImageCommands/ImageBuild 2.34
125 TestFunctional/parallel/ImageCommands/Setup 0.97
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
136 TestFunctional/parallel/MountCmd/any-port 35.41
137 TestFunctional/parallel/ProfileCmd/profile_list 0.37
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
146 TestFunctional/parallel/MountCmd/specific-port 1.77
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.18
149 TestFunctional/parallel/ServiceCmd/List 1.22
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 202.96
162 TestMultiControlPlane/serial/DeployApp 4.89
163 TestMultiControlPlane/serial/PingHostFromPods 1.23
164 TestMultiControlPlane/serial/AddWorkerNode 46.5
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
167 TestMultiControlPlane/serial/CopyFile 12.96
168 TestMultiControlPlane/serial/StopSecondaryNode 84.2
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.31
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.02
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.4
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 251.01
176 TestMultiControlPlane/serial/RestartCluster 120.11
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 84.69
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 53.93
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.95
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.19
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 81.41
215 TestMountStart/serial/StartWithMountFirst 20.86
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 21.45
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.37
221 TestMountStart/serial/Stop 1.37
222 TestMountStart/serial/RestartStopped 20
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 103.53
227 TestMultiNode/serial/DeployApp2Nodes 4.32
228 TestMultiNode/serial/PingHostFrom2Pods 0.76
229 TestMultiNode/serial/AddNode 42.96
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.59
232 TestMultiNode/serial/CopyFile 7.07
233 TestMultiNode/serial/StopNode 2.46
234 TestMultiNode/serial/StartAfterStop 38.41
235 TestMultiNode/serial/RestartKeepsNodes 303.52
236 TestMultiNode/serial/DeleteNode 2.77
237 TestMultiNode/serial/StopMultiNode 164.29
238 TestMultiNode/serial/RestartMultiNode 87.4
239 TestMultiNode/serial/ValidateNameConflict 42.07
246 TestScheduledStopUnix 115.07
250 TestRunningBinaryUpgrade 82.87
252 TestKubernetesUpgrade 192.93
255 TestStoppedBinaryUpgrade/Setup 0.55
256 TestPause/serial/Start 106.8
257 TestStoppedBinaryUpgrade/Upgrade 141.51
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
267 TestNetworkPlugins/group/false 3.05
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
273 TestNoKubernetes/serial/StartWithK8s 44.41
281 TestNetworkPlugins/group/auto/Start 108.16
282 TestNoKubernetes/serial/StartWithStopK8s 31.44
283 TestNoKubernetes/serial/Start 24.73
284 TestNetworkPlugins/group/kindnet/Start 96.04
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
286 TestNoKubernetes/serial/ProfileList 1.52
287 TestNoKubernetes/serial/Stop 1.36
288 TestNoKubernetes/serial/StartNoArgs 35.54
289 TestNetworkPlugins/group/auto/KubeletFlags 0.24
290 TestNetworkPlugins/group/auto/NetCatPod 12.28
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
292 TestNetworkPlugins/group/calico/Start 73.01
293 TestNetworkPlugins/group/auto/DNS 0.18
294 TestNetworkPlugins/group/auto/Localhost 0.12
295 TestNetworkPlugins/group/auto/HairPin 0.14
296 TestNetworkPlugins/group/custom-flannel/Start 83.42
297 TestNetworkPlugins/group/enable-default-cni/Start 92.74
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
300 TestNetworkPlugins/group/kindnet/NetCatPod 14.29
301 TestNetworkPlugins/group/kindnet/DNS 0.2
302 TestNetworkPlugins/group/kindnet/Localhost 0.17
303 TestNetworkPlugins/group/kindnet/HairPin 0.18
304 TestNetworkPlugins/group/calico/ControllerPod 6.15
305 TestNetworkPlugins/group/calico/KubeletFlags 0.34
306 TestNetworkPlugins/group/calico/NetCatPod 13.02
307 TestNetworkPlugins/group/flannel/Start 78.79
308 TestNetworkPlugins/group/calico/DNS 0.19
309 TestNetworkPlugins/group/calico/Localhost 0.16
310 TestNetworkPlugins/group/calico/HairPin 0.23
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
313 TestNetworkPlugins/group/bridge/Start 66.93
314 TestNetworkPlugins/group/custom-flannel/DNS 0.19
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
318 TestStartStop/group/old-k8s-version/serial/FirstStart 105.23
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestStartStop/group/no-preload/serial/FirstStart 106.1
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
328 TestNetworkPlugins/group/flannel/NetCatPod 12.28
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
330 TestNetworkPlugins/group/bridge/NetCatPod 10.49
331 TestNetworkPlugins/group/flannel/DNS 0.16
332 TestNetworkPlugins/group/flannel/Localhost 0.13
333 TestNetworkPlugins/group/flannel/HairPin 0.13
334 TestNetworkPlugins/group/bridge/DNS 0.14
335 TestNetworkPlugins/group/bridge/Localhost 0.12
336 TestNetworkPlugins/group/bridge/HairPin 0.17
338 TestStartStop/group/embed-certs/serial/FirstStart 86.38
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 108.66
341 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.41
343 TestStartStop/group/old-k8s-version/serial/Stop 80.66
344 TestStartStop/group/no-preload/serial/DeployApp 8.28
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
346 TestStartStop/group/no-preload/serial/Stop 84.04
347 TestStartStop/group/embed-certs/serial/DeployApp 9.27
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
349 TestStartStop/group/embed-certs/serial/Stop 82.15
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.37
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
354 TestStartStop/group/old-k8s-version/serial/SecondStart 46.04
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/no-preload/serial/SecondStart 61.53
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
359 TestStartStop/group/embed-certs/serial/SecondStart 52.28
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
362 TestStartStop/group/old-k8s-version/serial/Pause 3.15
364 TestStartStop/group/newest-cni/serial/FirstStart 59.63
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 66.25
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
371 TestStartStop/group/no-preload/serial/Pause 3.34
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
374 TestStartStop/group/embed-certs/serial/Pause 3.85
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
377 TestStartStop/group/newest-cni/serial/Stop 11.04
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
379 TestStartStop/group/newest-cni/serial/SecondStart 35.41
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.43
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
387 TestStartStop/group/newest-cni/serial/Pause 2.53
TestDownloadOnly/v1.28.0/json-events (7.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-910458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-910458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.818891781s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.82s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:19:43.787182    7691 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 10:19:43.787282    7691 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-910458
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-910458: exit status 85 (58.050204ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-910458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:36.008238    7703 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:36.008319    7703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:36.008323    7703 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:36.008327    7703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:36.008577    7703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	W0929 10:19:36.008700    7703 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21657-3816/.minikube/config/config.json: open /home/jenkins/minikube-integration/21657-3816/.minikube/config/config.json: no such file or directory
	I0929 10:19:36.009153    7703 out.go:368] Setting JSON to true
	I0929 10:19:36.010050    7703 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":121,"bootTime":1759141055,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:36.010133    7703 start.go:140] virtualization: kvm guest
	I0929 10:19:36.012092    7703 out.go:99] [download-only-910458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 10:19:36.012225    7703 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:19:36.012263    7703 notify.go:220] Checking for updates...
	I0929 10:19:36.013395    7703 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:19:36.014798    7703 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:36.015960    7703 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:19:36.017063    7703 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:36.018199    7703 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:19:36.020298    7703 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:19:36.020627    7703 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:36.515404    7703 out.go:99] Using the kvm2 driver based on user configuration
	I0929 10:19:36.515438    7703 start.go:304] selected driver: kvm2
	I0929 10:19:36.515447    7703 start.go:924] validating driver "kvm2" against <nil>
	I0929 10:19:36.515775    7703 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:36.515895    7703 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:36.530892    7703 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:36.530921    7703 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21657-3816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 10:19:36.543600    7703 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 10:19:36.543635    7703 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:36.544181    7703 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0929 10:19:36.544380    7703 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:19:36.544420    7703 cni.go:84] Creating CNI manager for ""
	I0929 10:19:36.544476    7703 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 10:19:36.544489    7703 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:36.544554    7703 start.go:348] cluster config:
	{Name:download-only-910458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-910458 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:36.544764    7703 iso.go:125] acquiring lock: {Name:mk6893cf08d5f5d64906f89556bbcb1c3b23df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:36.546666    7703 out.go:99] Downloading VM boot image ...
	I0929 10:19:36.546704    7703 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21657-3816/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 10:19:39.276812    7703 out.go:99] Starting "download-only-910458" primary control-plane node in "download-only-910458" cluster
	I0929 10:19:39.276843    7703 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:19:39.303757    7703 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:39.303786    7703 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:39.303946    7703 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:19:39.305924    7703 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 10:19:39.305945    7703 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:19:39.339467    7703 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-910458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-910458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-910458
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-452531 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-452531 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4.259953435s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.26s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:19:48.373807    7691 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 10:19:48.373848    7691 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-452531
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-452531: exit status 85 (56.179007ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-910458 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-910458                                                                                                                                                                             │ download-only-910458 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ -o=json --download-only -p download-only-452531 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-452531 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:44.151153    7924 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:44.151399    7924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:44.151408    7924 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:44.151412    7924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:44.151596    7924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:19:44.152040    7924 out.go:368] Setting JSON to true
	I0929 10:19:44.152889    7924 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":129,"bootTime":1759141055,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:44.152969    7924 start.go:140] virtualization: kvm guest
	I0929 10:19:44.154780    7924 out.go:99] [download-only-452531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:44.154937    7924 notify.go:220] Checking for updates...
	I0929 10:19:44.156309    7924 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:19:44.157742    7924 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:44.158822    7924 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:19:44.160078    7924 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:19:44.161184    7924 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-452531 host does not exist
	  To start a cluster, run: "minikube start -p download-only-452531"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-452531
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 10:19:48.933852    7691 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-757361 --alsologtostderr --binary-mirror http://127.0.0.1:43621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-757361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-757361
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (128.68s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-857340 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-857340 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m7.826365395s)
helpers_test.go:175: Cleaning up "offline-crio-857340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-857340
--- PASS: TestOffline (128.68s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-911532
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-911532: exit status 85 (59.715815ms)

                                                
                                                
-- stdout --
	* Profile "addons-911532" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-911532"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-911532
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-911532: exit status 85 (52.512953ms)

                                                
                                                
-- stdout --
	* Profile "addons-911532" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-911532"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-911532 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-911532 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.75849935s)
--- PASS: TestAddons/Setup (154.76s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-911532 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-911532 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-911532 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-911532 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [50aa0ab4-8b35-4c2d-a178-4efae92e01df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [50aa0ab4-8b35-4c2d-a178-4efae92e01df] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004250278s
addons_test.go:694: (dbg) Run:  kubectl --context addons-911532 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-911532 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-911532 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.50s)

                                                
                                    
TestAddons/parallel/Registry (71.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.00157ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-jqjcd" [0c88f6a7-9d7a-40eb-a93a-59bc1e285db9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004707469s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2jwvb" [79fc320c-8be7-4196-9a5d-2c15ae47e503] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004391443s
addons_test.go:392: (dbg) Run:  kubectl --context addons-911532 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-911532 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-911532 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (59.927439096s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 ip
2025/09/29 10:23:53 [DEBUG] GET http://192.168.39.179:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (71.77s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.7s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.939131ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-911532
addons_test.go:332: (dbg) Run:  kubectl --context addons-911532 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-tp4c9" [b33b4eee-87ed-427c-97fe-684dc1a39dc1] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005108956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.734747ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-c25dl" [6e7da679-c6f1-46e2-9b63-41ed0241a079] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004633883s
addons_test.go:463: (dbg) Run:  kubectl --context addons-911532 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

                                                
                                    
TestAddons/parallel/Headlamp (80.06s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-911532 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-911532 --alsologtostderr -v=1: (1.215678107s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-vqctr" [07d48962-bad5-4e4e-ba41-72786899b523] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-vqctr" [07d48962-bad5-4e4e-ba41-72786899b523] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-vqctr" [07d48962-bad5-4e4e-ba41-72786899b523] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m13.004460146s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable headlamp --alsologtostderr -v=1: (5.837438484s)
--- PASS: TestAddons/parallel/Headlamp (80.06s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-2jxhl" [b210bb25-ec2b-404a-9fb1-1333b05116e4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003629385s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-f6jdr" [4ec65e75-eb10-4514-befa-234528f55822] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004470201s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vwqlf" [7e163454-56d0-430b-a9e6-7d9187ed3061] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003982926s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-911532 addons disable yakd --alsologtostderr -v=1: (5.786105812s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (82.78s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-911532
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-911532: (1m22.515001336s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-911532
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-911532
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-911532
--- PASS: TestAddons/StoppedEnableDisable (82.78s)

                                                
                                    
TestCertOptions (63.57s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-424773 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-424773 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.134220195s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-424773 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-424773 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-424773 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-424773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-424773
--- PASS: TestCertOptions (63.57s)

                                                
                                    
TestCertExpiration (285.54s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415186 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:37:14.613074    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415186 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.196805678s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415186 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415186 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.394430876s)
helpers_test.go:175: Cleaning up "cert-expiration-415186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-415186
--- PASS: TestCertExpiration (285.54s)

                                                
                                    
TestForceSystemdFlag (73.05s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-435555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:37:25.042629    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-435555 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.976893658s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-435555 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-435555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-435555
--- PASS: TestForceSystemdFlag (73.05s)

                                                
                                    
TestForceSystemdEnv (44.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-887444 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-887444 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.507374526s)
helpers_test.go:175: Cleaning up "force-systemd-env-887444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-887444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-887444: (1.86761541s)
--- PASS: TestForceSystemdEnv (44.38s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.03s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:39:00.785966    7691 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:39:00.786101    7691 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1624437268/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:39:00.815878    7691 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1624437268/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:39:00.815915    7691 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:39:00.816040    7691 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:39:00.816086    7691 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1624437268/001/docker-machine-driver-kvm2
I0929 11:39:01.678337    7691 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1624437268/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:39:01.697394    7691 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1624437268/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.03s)

                                                
                                    
TestErrorSpam/setup (40.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-678085 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-678085 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-678085 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-678085 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.547211521s)
--- PASS: TestErrorSpam/setup (40.55s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
TestErrorSpam/stop (5.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop: (2.083170938s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop: (1.812171945s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-678085 --log_dir /tmp/nospam-678085 stop: (1.491318791s)
--- PASS: TestErrorSpam/stop (5.39s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21657-3816/.minikube/files/etc/test/nested/copy/7691/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.9s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-960153 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.899486196s)
--- PASS: TestFunctional/serial/StartWithProxy (80.90s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.85s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 10:35:50.152388    7691 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-960153 --alsologtostderr -v=8: (34.850218228s)
functional_test.go:678: soft start took 34.850951089s for "functional-960153" cluster.
I0929 10:36:25.002992    7691 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (34.85s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-960153 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:3.1: (1.098674447s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:3.3: (1.123058469s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 cache add registry.k8s.io/pause:latest: (1.171949951s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-960153 /tmp/TestFunctionalserialCacheCmdcacheadd_local820615497/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache add minikube-local-cache-test:functional-960153
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 cache add minikube-local-cache-test:functional-960153: (1.15014742s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache delete minikube-local-cache-test:functional-960153
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-960153
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.466028ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 kubectl -- --context functional-960153 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-960153 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-960153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.934240637s)
functional_test.go:776: restart took 33.934393245s for "functional-960153" cluster.
I0929 10:37:06.291838    7691 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (33.93s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-960153 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs: (1.522939739s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 logs --file /tmp/TestFunctionalserialLogsFileCmd2663921262/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 logs --file /tmp/TestFunctionalserialLogsFileCmd2663921262/001/logs.txt: (1.51436936s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.68s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-960153 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-960153
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-960153: exit status 115 (287.971139ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.210:31604 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-960153 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-960153 delete -f testdata/invalidsvc.yaml: (1.20315568s)
--- PASS: TestFunctional/serial/InvalidService (4.68s)
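
The exit status 115 above maps to SVC_UNREACHABLE: the service object exists and is assigned a NodePort URL, but no running pod backs it. A quick way to check that condition directly is to look for ready endpoint addresses; the sketch below does so through kubectl, with a jsonpath expression of my own choosing rather than anything the test runs.

// svcendpoints.go: sketch of checking for ready endpoint addresses behind a
// service, the condition that the SVC_UNREACHABLE exit above reports on.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Context, namespace and service name are taken from the log above.
	out, err := exec.Command("kubectl", "--context", "functional-960153",
		"get", "endpoints", "invalid-svc", "-n", "default",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("endpoint lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints: the service is unreachable, matching exit status 115 above")
		return
	}
	fmt.Println("ready endpoint IPs:", strings.TrimSpace(string(out)))
}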

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 config get cpus: exit status 14 (47.966578ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 config get cpus: exit status 14 (51.565538ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
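
Both `config get cpus` failures above return exit status 14 alongside the message "specified key could not be found in config", so a caller can treat that code as "key unset" rather than a hard error. A minimal sketch of that distinction follows, assuming minikube is on PATH; the helper name is illustrative.

// configprobe.go: sketch of reading a minikube config key while treating
// "key not present" (exit status 14 with the message shown above) as a
// normal outcome rather than an error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// getConfig returns the value of key, or ok=false when the key is unset.
func getConfig(profile, key string) (value string, ok bool, err error) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
	if err == nil {
		return strings.TrimSpace(string(out)), true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil // same exit code the log shows for an unset key
	}
	return "", false, err
}

func main() {
	v, ok, err := getConfig("functional-960153", "cpus")
	fmt.Printf("value=%q set=%v err=%v\n", v, ok, err)
}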

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (124.241619ms)

                                                
                                                
-- stdout --
	* [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:37:23.501919   20245 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.502023   20245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.502035   20245 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.502039   20245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.502196   20245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.502666   20245 out.go:368] Setting JSON to false
	I0929 10:37:23.503628   20245 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1188,"bootTime":1759141055,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.503715   20245 start.go:140] virtualization: kvm guest
	I0929 10:37:23.505800   20245 out.go:179] * [functional-960153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.507170   20245 notify.go:220] Checking for updates...
	I0929 10:37:23.507184   20245 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.508535   20245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.509910   20245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.511098   20245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.512299   20245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.513429   20245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.514886   20245 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.515281   20245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.515332   20245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.529127   20245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I0929 10:37:23.529722   20245 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.530278   20245 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.530304   20245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.530671   20245 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.530871   20245 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.531108   20245 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.531421   20245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.531455   20245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.545436   20245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I0929 10:37:23.545884   20245 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.546409   20245 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.546429   20245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.546869   20245 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.547045   20245 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.577523   20245 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 10:37:23.578743   20245 start.go:304] selected driver: kvm2
	I0929 10:37:23.578755   20245 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.578871   20245 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.581053   20245 out.go:203] 
	W0929 10:37:23.582212   20245 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:37:23.583362   20245 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.25s)
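
The first dry run is rejected before any VM work starts: 250 MiB is below the 1800 MB usable minimum quoted in the error, and the process exits with status 23 under the RSRC_INSUFFICIENT_REQ_MEMORY reason. A toy sketch of that kind of pre-flight check follows; it is not minikube's implementation, and the floor and exit code are simply copied from the log above.

// memcheck.go: illustrative pre-flight memory validation in the spirit of the
// RSRC_INSUFFICIENT_REQ_MEMORY failure above. This is not minikube's code;
// the 1800 MB floor and the exit code 23 are copied from the log.
package main

import (
	"fmt"
	"os"
)

const minMemoryMB = 1800 // usable minimum quoted in the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23)
	}
	fmt.Println("memory request accepted")
}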

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-960153 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (128.24119ms)

                                                
                                                
-- stdout --
	* [functional-960153] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:37:23.259141   20183 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:37:23.259229   20183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.259239   20183 out.go:374] Setting ErrFile to fd 2...
	I0929 10:37:23.259245   20183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:37:23.259553   20183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:37:23.259992   20183 out.go:368] Setting JSON to false
	I0929 10:37:23.260941   20183 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1188,"bootTime":1759141055,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:37:23.261040   20183 start.go:140] virtualization: kvm guest
	I0929 10:37:23.263143   20183 out.go:179] * [functional-960153] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 10:37:23.264517   20183 notify.go:220] Checking for updates...
	I0929 10:37:23.264526   20183 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:37:23.265748   20183 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:37:23.267263   20183 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 10:37:23.268289   20183 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 10:37:23.269453   20183 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:37:23.270934   20183 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:37:23.273077   20183 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:37:23.273577   20183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.273636   20183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.287969   20183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I0929 10:37:23.288557   20183 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.289147   20183 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.289161   20183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.289562   20183 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.289796   20183 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.290076   20183 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:37:23.290434   20183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:37:23.290483   20183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:37:23.303750   20183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0929 10:37:23.304244   20183 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:37:23.304688   20183 main.go:141] libmachine: Using API Version  1
	I0929 10:37:23.304710   20183 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:37:23.305051   20183 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:37:23.305229   20183 main.go:141] libmachine: (functional-960153) Calling .DriverName
	I0929 10:37:23.335540   20183 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0929 10:37:23.336801   20183 start.go:304] selected driver: kvm2
	I0929 10:37:23.336816   20183 start.go:924] validating driver "kvm2" against &{Name:functional-960153 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-960153 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:37:23.336908   20183 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:37:23.338882   20183 out.go:203] 
	W0929 10:37:23.340392   20183 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 10:37:23.341505   20183 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
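
The only difference from the DryRun output above is the language of the messages: the same memory validation fails and the same exit status 23 comes back, this time in French. Below is a sketch of reproducing that output by setting the locale before launching the binary; the LC_ALL/LANG mechanism is an assumption on my part, since the log only shows the translated result, not how the test selects the language.

// i18n.go: sketch of reproducing the French output by setting the locale
// before launching the same dry run. The LC_ALL/LANG mechanism is an
// assumption; the log only shows the translated result, not how the test
// selects the language.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-960153",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput() // expected to fail with exit status 23, as above
	fmt.Printf("%s\nerr: %v\n", out, err)
}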

                                                
                                    
TestFunctional/parallel/StatusCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh -n functional-960153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cp functional-960153:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3849588332/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh -n functional-960153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh -n functional-960153 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
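
The CpCmd steps above copy a fixture into the node, copy it back out, and read it over SSH to confirm the contents survived. A compact sketch of the same round trip follows, reusing the profile and paths from the log but comparing bytes locally instead of via `ssh cat`; the local output path is arbitrary.

// cproundtrip.go: sketch of copying a file into the node and back, then
// comparing bytes locally, following the CpCmd sequence above. Paths and
// profile are reused from the log; the local output path is arbitrary.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "functional-960153"
	const src = "testdata/cp-test.txt" // fixture path used by the test above

	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	// Copy into the node, then copy the same file back out.
	if err := exec.Command("minikube", "-p", profile, "cp", src, "/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	back := "/tmp/cp-test-roundtrip.txt"
	if err := exec.Command("minikube", "-p", profile, "cp",
		profile+":/home/docker/cp-test.txt", back).Run(); err != nil {
		log.Fatal(err)
	}
	got, err := os.ReadFile(back)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(want, got) {
		log.Fatal("round-tripped file differs from the original")
	}
	log.Println("cp round trip matched")
}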

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7691/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /etc/test/nested/copy/7691/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7691.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /etc/ssl/certs/7691.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7691.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /usr/share/ca-certificates/7691.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/76912.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /etc/ssl/certs/76912.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/76912.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /usr/share/ca-certificates/76912.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-960153 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "sudo systemctl is-active docker": exit status 1 (253.167599ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "sudo systemctl is-active containerd": exit status 1 (237.362731ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
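
With cri-o as the selected runtime, the docker and containerd probes above print "inactive" and exit with status 3, which the SSH wrapper propagates. A small sketch of the same check from Go follows, reusing the profile name and command shape from the log; the service list is of my own choosing.

// runtimecheck.go: sketch of probing container-runtime units inside the node,
// mirroring the systemctl checks above. The service list is illustrative;
// profile name and command shape come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether the named systemd unit is active inside the node.
// An inactive unit prints "inactive" and exits non-zero, as seen above.
func isActive(profile, service string) bool {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+service).CombinedOutput()
	return err == nil && strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, svc := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%-10s active: %v\n", svc, isActive("functional-960153", svc))
	}
}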

                                                
                                    
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960153 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-960153
localhost/kicbase/echo-server:functional-960153
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960153 image ls --format short --alsologtostderr:
I0929 10:43:25.348637   22390 out.go:360] Setting OutFile to fd 1 ...
I0929 10:43:25.348890   22390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.348901   22390 out.go:374] Setting ErrFile to fd 2...
I0929 10:43:25.348907   22390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.349089   22390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:43:25.349670   22390 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.349792   22390 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.350190   22390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.350235   22390 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.363369   22390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
I0929 10:43:25.363819   22390 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.364437   22390 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.364465   22390 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.364802   22390 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.365009   22390 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:43:25.366755   22390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.366795   22390 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.379409   22390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
I0929 10:43:25.379754   22390 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.380144   22390 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.380167   22390 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.380557   22390 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.380768   22390 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:43:25.381027   22390 ssh_runner.go:195] Run: systemctl --version
I0929 10:43:25.381053   22390 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:43:25.384417   22390 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.384842   22390 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:43:25.384879   22390 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.385026   22390 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:43:25.385188   22390 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:43:25.385344   22390 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:43:25.385519   22390 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:43:25.467943   22390 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:43:25.510474   22390 main.go:141] libmachine: Making call to close driver server
I0929 10:43:25.510492   22390 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:25.510763   22390 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:25.510793   22390 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:25.510811   22390 main.go:141] libmachine: Making call to close driver server
I0929 10:43:25.510832   22390 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:25.511080   22390 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:25.511107   22390 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:25.511150   22390 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960153 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-960153  │ 49e0de6ba337e │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-960153  │ 90c80b1e95738 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/kicbase/echo-server           │ functional-960153  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960153 image ls --format table --alsologtostderr:
I0929 10:43:28.311182   22559 out.go:360] Setting OutFile to fd 1 ...
I0929 10:43:28.311437   22559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:28.311446   22559 out.go:374] Setting ErrFile to fd 2...
I0929 10:43:28.311451   22559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:28.311617   22559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:43:28.312183   22559 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:28.312271   22559 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:28.312632   22559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:28.312669   22559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:28.325589   22559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
I0929 10:43:28.326125   22559 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:28.326652   22559 main.go:141] libmachine: Using API Version  1
I0929 10:43:28.326673   22559 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:28.327086   22559 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:28.327302   22559 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:43:28.329189   22559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:28.329227   22559 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:28.342819   22559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
I0929 10:43:28.343293   22559 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:28.343825   22559 main.go:141] libmachine: Using API Version  1
I0929 10:43:28.343844   22559 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:28.344221   22559 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:28.344435   22559 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:43:28.344847   22559 ssh_runner.go:195] Run: systemctl --version
I0929 10:43:28.344874   22559 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:43:28.348416   22559 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:28.348963   22559 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:43:28.348992   22559 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:28.349200   22559 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:43:28.349392   22559 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:43:28.349539   22559 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:43:28.349677   22559 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:43:28.430202   22559 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:43:28.472286   22559 main.go:141] libmachine: Making call to close driver server
I0929 10:43:28.472301   22559 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:28.472561   22559 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:28.472583   22559 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:28.472594   22559 main.go:141] libmachine: Making call to close driver server
I0929 10:43:28.472597   22559 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:28.472601   22559 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:28.472844   22559 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:28.472896   22559 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:28.472861   22559 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960153 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"49e0de6ba337ee99b4f51a183000f156f98d87796251238b25b47854b0dbba0c","repoDigests":["localhost/my-image@sha256:091b5a6e42e224493e1185e327aba113c4cc3241ded227b72a154c4867eb3d58"],"repoTags":["localhost/my-image:functional-960153"],"size":"1468600"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"760
04183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d098201dda18de93a82d3747d582998bfeba272f257b82c0126b4eebc69357","repoDigests":["docker.io/library/19e3f2f8a1f6337ab865ad9d323f7e2e03de7ee9dbaf401e76611d91ad01781e-tmp@sha256:bb0c4bdd5c8fc842113b275d832c5b0cc3b5bb94f718416a5a52684080462779"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:
62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-960153"],"size":"4943877"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71
170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"90c80b1e95738f61d026ab8978b111a512c13c326f0a55dc364ad5efdeb631f6","repoDigests":["localhost/minikube-local-cache-test@sha256:36f60a6d58ee715d5febc78f8ab02f7b19b5e743006e51e1fe81fc768cda3660"],"repoTags":["localhost/minikube-local-cache-test:functional-960153"],"size":"3328"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad
045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"re
poTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960153 image ls --format json --alsologtostderr:
I0929 10:43:28.103393   22517 out.go:360] Setting OutFile to fd 1 ...
I0929 10:43:28.103640   22517 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:28.103650   22517 out.go:374] Setting ErrFile to fd 2...
I0929 10:43:28.103654   22517 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:28.103917   22517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:43:28.104546   22517 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:28.104652   22517 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:28.105052   22517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:28.105130   22517 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:28.119027   22517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
I0929 10:43:28.119675   22517 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:28.120381   22517 main.go:141] libmachine: Using API Version  1
I0929 10:43:28.120410   22517 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:28.120851   22517 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:28.121052   22517 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:43:28.123315   22517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:28.123371   22517 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:28.136630   22517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
I0929 10:43:28.137180   22517 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:28.137701   22517 main.go:141] libmachine: Using API Version  1
I0929 10:43:28.137721   22517 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:28.138066   22517 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:28.138259   22517 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:43:28.138508   22517 ssh_runner.go:195] Run: systemctl --version
I0929 10:43:28.138530   22517 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:43:28.141497   22517 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:28.141964   22517 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:43:28.141980   22517 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:28.142151   22517 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:43:28.142335   22517 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:43:28.142500   22517 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:43:28.142695   22517 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:43:28.223481   22517 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:43:28.262718   22517 main.go:141] libmachine: Making call to close driver server
I0929 10:43:28.262735   22517 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:28.263154   22517 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:28.263172   22517 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:28.263178   22517 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:28.263202   22517 main.go:141] libmachine: Making call to close driver server
I0929 10:43:28.263213   22517 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:28.263501   22517 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:28.263547   22517 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:28.263564   22517 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
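The stdout above is a single JSON array of image records (id, repoDigests, repoTags, size). As a minimal sketch, assuming jq is available on the host, that output can be filtered into a tag/size listing; the filter itself is illustrative and is not part of the test:

# Print "<tag> <size-in-bytes>" for every tagged image, skipping untagged build layers.
out/minikube-linux-amd64 -p functional-960153 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[0] + " " + .size'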

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960153 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-960153
size: "4943877"
- id: 90c80b1e95738f61d026ab8978b111a512c13c326f0a55dc364ad5efdeb631f6
repoDigests:
- localhost/minikube-local-cache-test@sha256:36f60a6d58ee715d5febc78f8ab02f7b19b5e743006e51e1fe81fc768cda3660
repoTags:
- localhost/minikube-local-cache-test:functional-960153
size: "3328"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960153 image ls --format yaml --alsologtostderr:
I0929 10:43:25.558579   22414 out.go:360] Setting OutFile to fd 1 ...
I0929 10:43:25.558777   22414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.558789   22414 out.go:374] Setting ErrFile to fd 2...
I0929 10:43:25.558793   22414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.558985   22414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:43:25.559612   22414 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.559700   22414 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.560022   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.560057   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.573023   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
I0929 10:43:25.573807   22414 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.574423   22414 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.574452   22414 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.574784   22414 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.574974   22414 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:43:25.576680   22414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.576719   22414 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.590904   22414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
I0929 10:43:25.591292   22414 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.591771   22414 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.591791   22414 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.592139   22414 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.592295   22414 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:43:25.592491   22414 ssh_runner.go:195] Run: systemctl --version
I0929 10:43:25.592512   22414 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:43:25.595522   22414 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.595939   22414 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:43:25.595963   22414 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.596127   22414 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:43:25.596275   22414 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:43:25.596431   22414 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:43:25.596550   22414 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:43:25.678222   22414 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:43:25.719050   22414 main.go:141] libmachine: Making call to close driver server
I0929 10:43:25.719065   22414 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:25.719324   22414 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:25.719340   22414 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:25.719359   22414 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:25.719368   22414 main.go:141] libmachine: Making call to close driver server
I0929 10:43:25.719375   22414 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:25.719643   22414 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:25.719658   22414 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:25.719692   22414 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh pgrep buildkitd: exit status 1 (190.155401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr: (1.93175612s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c7d098201dd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-960153
--> 49e0de6ba33
Successfully tagged localhost/my-image:functional-960153
49e0de6ba337ee99b4f51a183000f156f98d87796251238b25b47854b0dbba0c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-960153 image build -t localhost/my-image:functional-960153 testdata/build --alsologtostderr:
I0929 10:43:25.957342   22468 out.go:360] Setting OutFile to fd 1 ...
I0929 10:43:25.957502   22468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.957512   22468 out.go:374] Setting ErrFile to fd 2...
I0929 10:43:25.957519   22468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:43:25.957743   22468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
I0929 10:43:25.958304   22468 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.958855   22468 config.go:182] Loaded profile config "functional-960153": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:43:25.959225   22468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.959256   22468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.972329   22468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
I0929 10:43:25.972759   22468 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.973252   22468 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.973283   22468 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.973642   22468 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.973841   22468 main.go:141] libmachine: (functional-960153) Calling .GetState
I0929 10:43:25.975762   22468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 10:43:25.975803   22468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 10:43:25.988668   22468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
I0929 10:43:25.989138   22468 main.go:141] libmachine: () Calling .GetVersion
I0929 10:43:25.989678   22468 main.go:141] libmachine: Using API Version  1
I0929 10:43:25.989701   22468 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 10:43:25.990127   22468 main.go:141] libmachine: () Calling .GetMachineName
I0929 10:43:25.990315   22468 main.go:141] libmachine: (functional-960153) Calling .DriverName
I0929 10:43:25.990544   22468 ssh_runner.go:195] Run: systemctl --version
I0929 10:43:25.990580   22468 main.go:141] libmachine: (functional-960153) Calling .GetSSHHostname
I0929 10:43:25.993758   22468 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.994161   22468 main.go:141] libmachine: (functional-960153) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:92:06", ip: ""} in network mk-functional-960153: {Iface:virbr1 ExpiryTime:2025-09-29 11:34:45 +0000 UTC Type:0 Mac:52:54:00:7e:92:06 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:functional-960153 Clientid:01:52:54:00:7e:92:06}
I0929 10:43:25.994203   22468 main.go:141] libmachine: (functional-960153) DBG | domain functional-960153 has defined IP address 192.168.39.210 and MAC address 52:54:00:7e:92:06 in network mk-functional-960153
I0929 10:43:25.994299   22468 main.go:141] libmachine: (functional-960153) Calling .GetSSHPort
I0929 10:43:25.994480   22468 main.go:141] libmachine: (functional-960153) Calling .GetSSHKeyPath
I0929 10:43:25.994650   22468 main.go:141] libmachine: (functional-960153) Calling .GetSSHUsername
I0929 10:43:25.994784   22468 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/functional-960153/id_rsa Username:docker}
I0929 10:43:26.076492   22468 build_images.go:161] Building image from path: /tmp/build.3835141981.tar
I0929 10:43:26.076557   22468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 10:43:26.089449   22468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3835141981.tar
I0929 10:43:26.095721   22468 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3835141981.tar: stat -c "%s %y" /var/lib/minikube/build/build.3835141981.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3835141981.tar': No such file or directory
I0929 10:43:26.095759   22468 ssh_runner.go:362] scp /tmp/build.3835141981.tar --> /var/lib/minikube/build/build.3835141981.tar (3072 bytes)
I0929 10:43:26.126883   22468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3835141981
I0929 10:43:26.139335   22468 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3835141981 -xf /var/lib/minikube/build/build.3835141981.tar
I0929 10:43:26.151495   22468 crio.go:315] Building image: /var/lib/minikube/build/build.3835141981
I0929 10:43:26.151579   22468 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-960153 /var/lib/minikube/build/build.3835141981 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 10:43:27.812951   22468 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-960153 /var/lib/minikube/build/build.3835141981 --cgroup-manager=cgroupfs: (1.661343506s)
I0929 10:43:27.813035   22468 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3835141981
I0929 10:43:27.827870   22468 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3835141981.tar
I0929 10:43:27.840971   22468 build_images.go:217] Built localhost/my-image:functional-960153 from /tmp/build.3835141981.tar
I0929 10:43:27.841009   22468 build_images.go:133] succeeded building to: functional-960153
I0929 10:43:27.841014   22468 build_images.go:134] failed building to: 
I0929 10:43:27.841041   22468 main.go:141] libmachine: Making call to close driver server
I0929 10:43:27.841058   22468 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:27.841286   22468 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:27.841297   22468 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 10:43:27.841316   22468 main.go:141] libmachine: Making call to close driver server
I0929 10:43:27.841324   22468 main.go:141] libmachine: (functional-960153) Calling .Close
I0929 10:43:27.841555   22468 main.go:141] libmachine: (functional-960153) DBG | Closing plugin on server side
I0929 10:43:27.841597   22468 main.go:141] libmachine: Successfully made call to close driver server
I0929 10:43:27.841614   22468 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.34s)
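The three logged build steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context of roughly the following shape. This is a hedged reconstruction, not the actual contents of testdata/build; the directory name and the content.txt payload are assumptions:

# Recreate a minimal build context matching the logged steps, then build it inside
# the cluster's runtime as the test does (podman under cri-o).
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "placeholder content" > content.txt   # actual payload in testdata/build is unknown
out/minikube-linux-amd64 -p functional-960153 image build -t localhost/my-image:functional-960153 /tmp/build-sketch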

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-960153
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (35.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdany-port1318152132/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759142235875735035" to /tmp/TestFunctionalparallelMountCmdany-port1318152132/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759142235875735035" to /tmp/TestFunctionalparallelMountCmdany-port1318152132/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759142235875735035" to /tmp/TestFunctionalparallelMountCmdany-port1318152132/001/test-1759142235875735035
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.621653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:37:16.101714    7691 retry.go:31] will retry after 498.93051ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:37 test-1759142235875735035
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh cat /mount-9p/test-1759142235875735035
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-960153 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [818a1168-13eb-40e5-a11e-ed073c8ca85f] Pending
helpers_test.go:352: "busybox-mount" [818a1168-13eb-40e5-a11e-ed073c8ca85f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [818a1168-13eb-40e5-a11e-ed073c8ca85f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [818a1168-13eb-40e5-a11e-ed073c8ca85f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 33.003845567s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-960153 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdany-port1318152132/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (35.41s)
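A minimal sketch of reproducing the 9p mount check by hand, using the same commands the test drives; /tmp/demo-mount is an illustrative host path:

mkdir -p /tmp/demo-mount
out/minikube-linux-amd64 mount -p functional-960153 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
# Verify from inside the guest that the 9p filesystem is mounted, then list its contents.
out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-960153 ssh -- ls -la /mount-9p
# Tear down the background mount process when done.
kill $!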

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "319.643729ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.613787ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "278.720544ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "46.829506ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image load --daemon kicbase/echo-server:functional-960153 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 image load --daemon kicbase/echo-server:functional-960153 --alsologtostderr: (1.215027088s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image load --daemon kicbase/echo-server:functional-960153 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-960153
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image load --daemon kicbase/echo-server:functional-960153 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image save kicbase/echo-server:functional-960153 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image rm kicbase/echo-server:functional-960153 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)
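Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a save/remove/load round trip. A hedged sketch of that flow, with an illustrative tarball path in place of the workspace path used by the test:

# Save the image from the cluster runtime to a host tarball, remove it, then load it back.
out/minikube-linux-amd64 -p functional-960153 image save kicbase/echo-server:functional-960153 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-960153 image rm kicbase/echo-server:functional-960153 --alsologtostderr
out/minikube-linux-amd64 -p functional-960153 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-960153 image ls   # the echo-server tag should be listed again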

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-960153
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 image save --daemon kicbase/echo-server:functional-960153 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-960153
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.652637ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:37:51.479652    7691 retry.go:31] will retry after 600.869113ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "sudo umount -f /mount-9p": exit status 1 (190.890585ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-960153 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdspecific-port2145470094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T" /mount1: exit status 1 (210.835552ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:37:53.262907    7691 retry.go:31] will retry after 363.844606ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-960153 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-960153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3951226267/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 service list: (1.224654175s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-960153 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-960153 service list -o json: (1.246592302s)
functional_test.go:1504: Took "1.246694087s" to run "out/minikube-linux-amd64 -p functional-960153 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-960153
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-960153
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-960153
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (202.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 10:53:48.107417    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m22.255822878s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (202.96s)
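As a hedged follow-up to the HA start above, the resulting topology can be inspected with the status command the test runs, or directly with kubectl against the ha-232439 context:

out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
kubectl --context ha-232439 get nodes -o wide   # an --ha start is expected to bring up multiple control-plane nodes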

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 kubectl -- rollout status deployment/busybox: (2.762144583s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-54v89 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-648xz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-rk4wr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-54v89 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-648xz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-rk4wr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-54v89 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-648xz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-rk4wr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.89s)
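The DeployApp checks above boil down to resolving in-cluster names from each busybox pod. A minimal sketch with plain kubectl; the pod name is taken from this particular run and will differ on a fresh deployment:

# List the busybox pods, then verify DNS resolution from one of them.
kubectl --context ha-232439 get pods -o jsonpath='{.items[*].metadata.name}'
kubectl --context ha-232439 exec busybox-7b57f96db7-54v89 -- nslookup kubernetes.default.svc.cluster.local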

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-54v89 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-54v89 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-648xz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-648xz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-rk4wr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 kubectl -- exec busybox-7b57f96db7-rk4wr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node add --alsologtostderr -v 5
E0929 10:57:14.613638    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.620075    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.631448    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.652907    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.694373    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.775836    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:14.937331    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:15.259228    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:15.901581    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:17.183633    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 node add --alsologtostderr -v 5: (45.619704571s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.50s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-232439 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0929 10:57:19.745760    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp testdata/cp-test.txt ha-232439:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2501208869/001/cp-test_ha-232439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439:/home/docker/cp-test.txt ha-232439-m02:/home/docker/cp-test_ha-232439_ha-232439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test_ha-232439_ha-232439-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439:/home/docker/cp-test.txt ha-232439-m03:/home/docker/cp-test_ha-232439_ha-232439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test_ha-232439_ha-232439-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439:/home/docker/cp-test.txt ha-232439-m04:/home/docker/cp-test_ha-232439_ha-232439-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test_ha-232439_ha-232439-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp testdata/cp-test.txt ha-232439-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2501208869/001/cp-test_ha-232439-m02.txt
E0929 10:57:24.867583    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test.txt"
E0929 10:57:25.042315    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m02:/home/docker/cp-test.txt ha-232439:/home/docker/cp-test_ha-232439-m02_ha-232439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test_ha-232439-m02_ha-232439.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m02:/home/docker/cp-test.txt ha-232439-m03:/home/docker/cp-test_ha-232439-m02_ha-232439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test_ha-232439-m02_ha-232439-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m02:/home/docker/cp-test.txt ha-232439-m04:/home/docker/cp-test_ha-232439-m02_ha-232439-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test_ha-232439-m02_ha-232439-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp testdata/cp-test.txt ha-232439-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2501208869/001/cp-test_ha-232439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m03:/home/docker/cp-test.txt ha-232439:/home/docker/cp-test_ha-232439-m03_ha-232439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test_ha-232439-m03_ha-232439.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m03:/home/docker/cp-test.txt ha-232439-m02:/home/docker/cp-test_ha-232439-m03_ha-232439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test_ha-232439-m03_ha-232439-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m03:/home/docker/cp-test.txt ha-232439-m04:/home/docker/cp-test_ha-232439-m03_ha-232439-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test_ha-232439-m03_ha-232439-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp testdata/cp-test.txt ha-232439-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2501208869/001/cp-test_ha-232439-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m04:/home/docker/cp-test.txt ha-232439:/home/docker/cp-test_ha-232439-m04_ha-232439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439 "sudo cat /home/docker/cp-test_ha-232439-m04_ha-232439.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m04:/home/docker/cp-test.txt ha-232439-m02:/home/docker/cp-test_ha-232439-m04_ha-232439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m02 "sudo cat /home/docker/cp-test_ha-232439-m04_ha-232439-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 cp ha-232439-m04:/home/docker/cp-test.txt ha-232439-m03:/home/docker/cp-test_ha-232439-m04_ha-232439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 ssh -n ha-232439-m03 "sudo cat /home/docker/cp-test_ha-232439-m04_ha-232439-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.96s)
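
For reference, the copy-file exercise above reduces to a minikube cp followed by a minikube ssh readback on each node. The Go sketch below replays one such round trip outside the test harness; the profile, node and paths are taken from the commands logged above, while the binary location and the error handling are assumptions, not part of the harness.

// Minimal sketch (not part of the harness): replay one cp/ssh round trip
// recorded above. Profile, node and file paths come from the log; the
// minikube binary path and the fatal error handling are assumptions.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file onto the first control-plane node, then read it back.
	run("-p", "ha-232439", "cp", "testdata/cp-test.txt", "ha-232439:/home/docker/cp-test.txt")
	got := run("-p", "ha-232439", "ssh", "-n", "ha-232439", "sudo cat /home/docker/cp-test.txt")
	fmt.Println("read back", len(strings.TrimSpace(got)), "bytes")
}

The same pattern repeats above for m02, m03 and m04 with only the node name after -n swapped.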

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (84.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node stop m02 --alsologtostderr -v 5
E0929 10:57:35.109921    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:55.591273    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:36.553757    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 node stop m02 --alsologtostderr -v 5: (1m23.512361748s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5: exit status 7 (687.395458ms)

                                                
                                                
-- stdout --
	ha-232439
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-232439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232439-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-232439-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:58:56.882458   29627 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:58:56.882592   29627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:58:56.882603   29627 out.go:374] Setting ErrFile to fd 2...
	I0929 10:58:56.882609   29627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:58:56.882781   29627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 10:58:56.882945   29627 out.go:368] Setting JSON to false
	I0929 10:58:56.882975   29627 mustload.go:65] Loading cluster: ha-232439
	I0929 10:58:56.883039   29627 notify.go:220] Checking for updates...
	I0929 10:58:56.883302   29627 config.go:182] Loaded profile config "ha-232439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:58:56.883317   29627 status.go:174] checking status of ha-232439 ...
	I0929 10:58:56.883715   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:56.883750   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:56.909120   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
	I0929 10:58:56.909694   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:56.910511   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:56.910556   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:56.910913   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:56.911089   29627 main.go:141] libmachine: (ha-232439) Calling .GetState
	I0929 10:58:56.913224   29627 status.go:371] ha-232439 host status = "Running" (err=<nil>)
	I0929 10:58:56.913239   29627 host.go:66] Checking if "ha-232439" exists ...
	I0929 10:58:56.913565   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:56.913605   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:56.927832   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0929 10:58:56.928407   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:56.928913   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:56.928931   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:56.929289   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:56.929494   29627 main.go:141] libmachine: (ha-232439) Calling .GetIP
	I0929 10:58:56.932738   29627 main.go:141] libmachine: (ha-232439) DBG | domain ha-232439 has defined MAC address 52:54:00:86:01:3d in network mk-ha-232439
	I0929 10:58:56.933279   29627 main.go:141] libmachine: (ha-232439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:01:3d", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:19 +0000 UTC Type:0 Mac:52:54:00:86:01:3d Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-232439 Clientid:01:52:54:00:86:01:3d}
	I0929 10:58:56.933321   29627 main.go:141] libmachine: (ha-232439) DBG | domain ha-232439 has defined IP address 192.168.39.150 and MAC address 52:54:00:86:01:3d in network mk-ha-232439
	I0929 10:58:56.933490   29627 host.go:66] Checking if "ha-232439" exists ...
	I0929 10:58:56.933779   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:56.933820   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:56.947125   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I0929 10:58:56.947547   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:56.947992   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:56.948011   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:56.948405   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:56.948623   29627 main.go:141] libmachine: (ha-232439) Calling .DriverName
	I0929 10:58:56.948874   29627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:58:56.948918   29627 main.go:141] libmachine: (ha-232439) Calling .GetSSHHostname
	I0929 10:58:56.952215   29627 main.go:141] libmachine: (ha-232439) DBG | domain ha-232439 has defined MAC address 52:54:00:86:01:3d in network mk-ha-232439
	I0929 10:58:56.952793   29627 main.go:141] libmachine: (ha-232439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:01:3d", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:53:19 +0000 UTC Type:0 Mac:52:54:00:86:01:3d Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-232439 Clientid:01:52:54:00:86:01:3d}
	I0929 10:58:56.952835   29627 main.go:141] libmachine: (ha-232439) DBG | domain ha-232439 has defined IP address 192.168.39.150 and MAC address 52:54:00:86:01:3d in network mk-ha-232439
	I0929 10:58:56.952870   29627 main.go:141] libmachine: (ha-232439) Calling .GetSSHPort
	I0929 10:58:56.953028   29627 main.go:141] libmachine: (ha-232439) Calling .GetSSHKeyPath
	I0929 10:58:56.953169   29627 main.go:141] libmachine: (ha-232439) Calling .GetSSHUsername
	I0929 10:58:56.953314   29627 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/ha-232439/id_rsa Username:docker}
	I0929 10:58:57.062293   29627 ssh_runner.go:195] Run: systemctl --version
	I0929 10:58:57.070244   29627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:58:57.091148   29627 kubeconfig.go:125] found "ha-232439" server: "https://192.168.39.254:8443"
	I0929 10:58:57.091190   29627 api_server.go:166] Checking apiserver status ...
	I0929 10:58:57.091232   29627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:58:57.113461   29627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	W0929 10:58:57.127231   29627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:58:57.127292   29627 ssh_runner.go:195] Run: ls
	I0929 10:58:57.133017   29627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 10:58:57.138786   29627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 10:58:57.138807   29627 status.go:463] ha-232439 apiserver status = Running (err=<nil>)
	I0929 10:58:57.138816   29627 status.go:176] ha-232439 status: &{Name:ha-232439 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:58:57.138836   29627 status.go:174] checking status of ha-232439-m02 ...
	I0929 10:58:57.139108   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.139141   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.152279   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0929 10:58:57.152749   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.153145   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.153164   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.153477   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.153679   29627 main.go:141] libmachine: (ha-232439-m02) Calling .GetState
	I0929 10:58:57.155257   29627 status.go:371] ha-232439-m02 host status = "Stopped" (err=<nil>)
	I0929 10:58:57.155273   29627 status.go:384] host is not running, skipping remaining checks
	I0929 10:58:57.155280   29627 status.go:176] ha-232439-m02 status: &{Name:ha-232439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:58:57.155299   29627 status.go:174] checking status of ha-232439-m03 ...
	I0929 10:58:57.155623   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.155659   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.168133   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I0929 10:58:57.168640   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.169121   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.169142   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.169528   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.169701   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetState
	I0929 10:58:57.171311   29627 status.go:371] ha-232439-m03 host status = "Running" (err=<nil>)
	I0929 10:58:57.171324   29627 host.go:66] Checking if "ha-232439-m03" exists ...
	I0929 10:58:57.171606   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.171638   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.185400   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0929 10:58:57.185918   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.186433   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.186462   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.186822   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.186998   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetIP
	I0929 10:58:57.190089   29627 main.go:141] libmachine: (ha-232439-m03) DBG | domain ha-232439-m03 has defined MAC address 52:54:00:23:22:10 in network mk-ha-232439
	I0929 10:58:57.190567   29627 main.go:141] libmachine: (ha-232439-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:22:10", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:55:25 +0000 UTC Type:0 Mac:52:54:00:23:22:10 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-232439-m03 Clientid:01:52:54:00:23:22:10}
	I0929 10:58:57.190602   29627 main.go:141] libmachine: (ha-232439-m03) DBG | domain ha-232439-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:23:22:10 in network mk-ha-232439
	I0929 10:58:57.190780   29627 host.go:66] Checking if "ha-232439-m03" exists ...
	I0929 10:58:57.191155   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.191189   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.204115   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0929 10:58:57.204563   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.204983   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.205007   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.205293   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.205501   29627 main.go:141] libmachine: (ha-232439-m03) Calling .DriverName
	I0929 10:58:57.205644   29627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:58:57.205661   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetSSHHostname
	I0929 10:58:57.208657   29627 main.go:141] libmachine: (ha-232439-m03) DBG | domain ha-232439-m03 has defined MAC address 52:54:00:23:22:10 in network mk-ha-232439
	I0929 10:58:57.209198   29627 main.go:141] libmachine: (ha-232439-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:22:10", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:55:25 +0000 UTC Type:0 Mac:52:54:00:23:22:10 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:ha-232439-m03 Clientid:01:52:54:00:23:22:10}
	I0929 10:58:57.209224   29627 main.go:141] libmachine: (ha-232439-m03) DBG | domain ha-232439-m03 has defined IP address 192.168.39.129 and MAC address 52:54:00:23:22:10 in network mk-ha-232439
	I0929 10:58:57.209438   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetSSHPort
	I0929 10:58:57.209585   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetSSHKeyPath
	I0929 10:58:57.209707   29627 main.go:141] libmachine: (ha-232439-m03) Calling .GetSSHUsername
	I0929 10:58:57.209793   29627 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/ha-232439-m03/id_rsa Username:docker}
	I0929 10:58:57.292222   29627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:58:57.312632   29627 kubeconfig.go:125] found "ha-232439" server: "https://192.168.39.254:8443"
	I0929 10:58:57.312656   29627 api_server.go:166] Checking apiserver status ...
	I0929 10:58:57.312687   29627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:58:57.336954   29627 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1743/cgroup
	W0929 10:58:57.350226   29627 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1743/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:58:57.350288   29627 ssh_runner.go:195] Run: ls
	I0929 10:58:57.355702   29627 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 10:58:57.362925   29627 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 10:58:57.362945   29627 status.go:463] ha-232439-m03 apiserver status = Running (err=<nil>)
	I0929 10:58:57.362953   29627 status.go:176] ha-232439-m03 status: &{Name:ha-232439-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:58:57.362967   29627 status.go:174] checking status of ha-232439-m04 ...
	I0929 10:58:57.363289   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.363330   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.376646   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0929 10:58:57.377088   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.377530   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.377555   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.377882   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.378052   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetState
	I0929 10:58:57.379848   29627 status.go:371] ha-232439-m04 host status = "Running" (err=<nil>)
	I0929 10:58:57.379882   29627 host.go:66] Checking if "ha-232439-m04" exists ...
	I0929 10:58:57.380165   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.380209   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.393548   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
	I0929 10:58:57.393969   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.394388   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.394408   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.394755   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.394923   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetIP
	I0929 10:58:57.397938   29627 main.go:141] libmachine: (ha-232439-m04) DBG | domain ha-232439-m04 has defined MAC address 52:54:00:5e:a1:5f in network mk-ha-232439
	I0929 10:58:57.398495   29627 main.go:141] libmachine: (ha-232439-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:a1:5f", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:56:49 +0000 UTC Type:0 Mac:52:54:00:5e:a1:5f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-232439-m04 Clientid:01:52:54:00:5e:a1:5f}
	I0929 10:58:57.398515   29627 main.go:141] libmachine: (ha-232439-m04) DBG | domain ha-232439-m04 has defined IP address 192.168.39.224 and MAC address 52:54:00:5e:a1:5f in network mk-ha-232439
	I0929 10:58:57.398671   29627 host.go:66] Checking if "ha-232439-m04" exists ...
	I0929 10:58:57.398999   29627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 10:58:57.399038   29627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 10:58:57.412040   29627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0929 10:58:57.412481   29627 main.go:141] libmachine: () Calling .GetVersion
	I0929 10:58:57.412933   29627 main.go:141] libmachine: Using API Version  1
	I0929 10:58:57.412963   29627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 10:58:57.413298   29627 main.go:141] libmachine: () Calling .GetMachineName
	I0929 10:58:57.413489   29627 main.go:141] libmachine: (ha-232439-m04) Calling .DriverName
	I0929 10:58:57.413677   29627 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:58:57.413700   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetSSHHostname
	I0929 10:58:57.416726   29627 main.go:141] libmachine: (ha-232439-m04) DBG | domain ha-232439-m04 has defined MAC address 52:54:00:5e:a1:5f in network mk-ha-232439
	I0929 10:58:57.417191   29627 main.go:141] libmachine: (ha-232439-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:a1:5f", ip: ""} in network mk-ha-232439: {Iface:virbr1 ExpiryTime:2025-09-29 11:56:49 +0000 UTC Type:0 Mac:52:54:00:5e:a1:5f Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-232439-m04 Clientid:01:52:54:00:5e:a1:5f}
	I0929 10:58:57.417224   29627 main.go:141] libmachine: (ha-232439-m04) DBG | domain ha-232439-m04 has defined IP address 192.168.39.224 and MAC address 52:54:00:5e:a1:5f in network mk-ha-232439
	I0929 10:58:57.417421   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetSSHPort
	I0929 10:58:57.417572   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetSSHKeyPath
	I0929 10:58:57.417690   29627 main.go:141] libmachine: (ha-232439-m04) Calling .GetSSHUsername
	I0929 10:58:57.417809   29627 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/ha-232439-m04/id_rsa Username:docker}
	I0929 10:58:57.504104   29627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:58:57.523595   29627 status.go:176] ha-232439-m04 status: &{Name:ha-232439-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.20s)
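
Note how the status call above exits with status 7 once ha-232439-m02 is stopped, while the remaining nodes still report Running. The sketch below reads that signal the same way, assuming (as the run above suggests) that a zero exit means everything is up and any non-zero exit marks a degraded cluster; the binary path and profile name are taken from the log.

// Sketch only: interpret `minikube status` the way the log above shows it.
// Exit code 0 = all components running; a non-zero exit (status 7 in the
// run above, while m02 was stopped) = at least one host/component down.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-232439",
		"status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster fully running")
	case errors.As(err, &exitErr):
		fmt.Println("cluster degraded, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run status:", err)
	}
}

The earlier CopyFile step ran the same command with --output json, which yields the same per-node information in machine-readable form.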

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 node start m02 --alsologtostderr -v 5: (36.278328883s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 stop --alsologtostderr -v 5
E0929 10:59:58.476182    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:14.615902    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:25.042049    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:42.318689    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 stop --alsologtostderr -v 5: (4m19.110051326s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 start --wait true --alsologtostderr -v 5: (2m3.81327054s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.02s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 node delete m03 --alsologtostderr -v 5: (17.619168323s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.40s)
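
The last kubectl call above prints every node's Ready condition through a go-template. The sketch below reruns that check and insists on True for each node; the template is copied from the logged invocation (minus the outer quotes the harness adds), and the all-True expectation is an assumption about what the test verifies.

// Sketch: rerun the node-readiness check from the log above and require
// every node's Ready condition to report "True".
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the kubectl call logged above (outer single quotes dropped).
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			log.Fatalf("node not Ready: %q", status)
		}
	}
	fmt.Println("all nodes report Ready=True")
}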

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (251.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 stop --alsologtostderr -v 5
E0929 11:07:14.617061    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:07:25.042372    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:10:28.111173    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 stop --alsologtostderr -v 5: (4m10.916668072s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5: exit status 7 (95.200271ms)

                                                
                                                
-- stdout --
	ha-232439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232439-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-232439-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:10:29.464423   33438 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:10:29.464665   33438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:10:29.464673   33438 out.go:374] Setting ErrFile to fd 2...
	I0929 11:10:29.464677   33438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:10:29.464864   33438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:10:29.465054   33438 out.go:368] Setting JSON to false
	I0929 11:10:29.465082   33438 mustload.go:65] Loading cluster: ha-232439
	I0929 11:10:29.465138   33438 notify.go:220] Checking for updates...
	I0929 11:10:29.465430   33438 config.go:182] Loaded profile config "ha-232439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:10:29.465445   33438 status.go:174] checking status of ha-232439 ...
	I0929 11:10:29.465834   33438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:10:29.465869   33438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:10:29.478978   33438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0929 11:10:29.479459   33438 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:10:29.480025   33438 main.go:141] libmachine: Using API Version  1
	I0929 11:10:29.480058   33438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:10:29.480509   33438 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:10:29.480703   33438 main.go:141] libmachine: (ha-232439) Calling .GetState
	I0929 11:10:29.482404   33438 status.go:371] ha-232439 host status = "Stopped" (err=<nil>)
	I0929 11:10:29.482420   33438 status.go:384] host is not running, skipping remaining checks
	I0929 11:10:29.482427   33438 status.go:176] ha-232439 status: &{Name:ha-232439 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:10:29.482455   33438 status.go:174] checking status of ha-232439-m02 ...
	I0929 11:10:29.482801   33438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:10:29.482847   33438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:10:29.495717   33438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0929 11:10:29.496108   33438 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:10:29.496476   33438 main.go:141] libmachine: Using API Version  1
	I0929 11:10:29.496523   33438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:10:29.496858   33438 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:10:29.497064   33438 main.go:141] libmachine: (ha-232439-m02) Calling .GetState
	I0929 11:10:29.498704   33438 status.go:371] ha-232439-m02 host status = "Stopped" (err=<nil>)
	I0929 11:10:29.498722   33438 status.go:384] host is not running, skipping remaining checks
	I0929 11:10:29.498729   33438 status.go:176] ha-232439-m02 status: &{Name:ha-232439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:10:29.498748   33438 status.go:174] checking status of ha-232439-m04 ...
	I0929 11:10:29.499039   33438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:10:29.499073   33438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:10:29.511845   33438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0929 11:10:29.512217   33438 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:10:29.512597   33438 main.go:141] libmachine: Using API Version  1
	I0929 11:10:29.512618   33438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:10:29.513031   33438 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:10:29.513218   33438 main.go:141] libmachine: (ha-232439-m04) Calling .GetState
	I0929 11:10:29.515156   33438 status.go:371] ha-232439-m04 host status = "Stopped" (err=<nil>)
	I0929 11:10:29.515170   33438 status.go:384] host is not running, skipping remaining checks
	I0929 11:10:29.515177   33438 status.go:176] ha-232439-m04 status: &{Name:ha-232439-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (251.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (120.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:12:14.613473    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:25.042179    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m59.306614654s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 node add --control-plane --alsologtostderr -v 5
E0929 11:13:37.681082    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-232439 node add --control-plane --alsologtostderr -v 5: (1m23.801463022s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-232439 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (53.93s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-662395 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-662395 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.932637423s)
--- PASS: TestJSONOutput/start/Command (53.93s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-662395 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-662395 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-662395 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-662395 --output=json --user=testUser: (6.948417565s)
--- PASS: TestJSONOutput/stop/Command (6.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-506161 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-506161 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (60.639472ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1d07c373-45de-46ad-b91c-cd05f1cdb77b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-506161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"39b6fcac-d289-4909-bda1-cdccec458cc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21657"}}
	{"specversion":"1.0","id":"38f061b6-d210-40dd-b09a-08e22fa2328e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddba45b0-029f-4d4a-9c71-9af650da592e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig"}}
	{"specversion":"1.0","id":"4390a6b6-1704-4c2f-89d2-53a352b4f5cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube"}}
	{"specversion":"1.0","id":"4a92951d-8934-45af-ad59-cf9254ed5a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0bb77cbe-755f-425d-955a-60529885eb1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"74c89d2b-3c7e-43af-9637-4b1a4764550d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-506161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-506161
--- PASS: TestErrorJSONOutput (0.19s)
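Worth noting from the run above: with --output=json every line minikube prints is a CloudEvents-style JSON object (specversion, id, source, type, data), and the failure surfaces as a type io.k8s.sigs.minikube.error event carrying exitcode 56 and name DRV_UNSUPPORTED_OS in its data map. A minimal Go sketch (not minikube's own code) that scans such line-oriented output and reports error events; the data map is modeled as map[string]string because every data value in the log above is a string:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent models only the fields visible in the report output above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Read JSON lines (e.g. piped from `minikube start --output=json ...`) from stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event: %s (exitcode %s, name %s)\n",
					ev.Data["message"], ev.Data["exitcode"], ev.Data["name"])
			}
		}
	}
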

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (81.41s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-931055 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-931055 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.859835765s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-944335 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-944335 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.817788083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-931055
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-944335
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-944335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-944335
helpers_test.go:175: Cleaning up "first-931055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-931055
--- PASS: TestMinikubeProfile (81.41s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-837307 --memory=3072 --mount-string /tmp/TestMountStartserial3610358215/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-837307 --memory=3072 --mount-string /tmp/TestMountStartserial3610358215/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.860326391s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837307 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837307 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
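The VerifyMount* steps above confirm the 9p mount two ways: `ssh -- ls /minikube-host` and `ssh -- findmnt --json /minikube-host`. A hedged Go sketch of the findmnt check; the report never shows findmnt's JSON, so the "filesystems" array and its field names below are assumptions about findmnt's usual output, not something taken from this log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// findmntOutput assumes findmnt --json wraps results in a "filesystems" array;
	// the exact shape is not shown in the report, so treat these fields as assumptions.
	type findmntOutput struct {
		Filesystems []struct {
			Target string `json:"target"`
			Source string `json:"source"`
			FSType string `json:"fstype"`
		} `json:"filesystems"`
	}

	func main() {
		// Mirror the test: run findmnt inside the VM over minikube ssh.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-837307",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			panic(err)
		}
		var fm findmntOutput
		if err := json.Unmarshal(out, &fm); err != nil {
			panic(err)
		}
		for _, fs := range fm.Filesystems {
			fmt.Printf("mounted %s at %s (%s)\n", fs.Source, fs.Target, fs.FSType)
		}
	}
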

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-852133 --memory=3072 --mount-string /tmp/TestMountStartserial3610358215/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-852133 --memory=3072 --mount-string /tmp/TestMountStartserial3610358215/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.452277458s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-837307 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.37s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-852133
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-852133: (1.36878944s)
--- PASS: TestMountStart/serial/Stop (1.37s)

                                                
                                    
TestMountStart/serial/RestartStopped (20s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-852133
E0929 11:17:14.617666    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:17:25.042471    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-852133: (18.996880574s)
--- PASS: TestMountStart/serial/RestartStopped (20.00s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-852133 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (103.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007768 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007768 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.098368495s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.32s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-007768 -- rollout status deployment/busybox: (2.805964822s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-9lgcr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-fgbkr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-9lgcr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-fgbkr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-9lgcr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-fgbkr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.32s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-9lgcr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-9lgcr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-fgbkr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007768 -- exec busybox-7b57f96db7-fgbkr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
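The pipeline used above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, grabs the third space-separated field on the fifth line of nslookup's output, i.e. the resolved host address, which is then pinged (192.168.39.1, the host side of the 192.168.39.0/24 VM network). A small Go sketch of the same position-based extraction; the sample string is a hypothetical busybox-style nslookup output, not captured from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3` from the test above:
	// take the fifth line of nslookup output and return its third space-separated
	// field. It is position-based, so it only works for output shaped like the
	// format the test expects.
	func hostIPFromNslookup(out string) (string, error) {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", out)
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected line: %q", lines[4])
		}
		return fields[2], nil
	}

	func main() {
		// Hypothetical busybox-style nslookup output, for illustration only.
		sample := strings.Join([]string{
			"Server:    10.96.0.10",
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
			"",
			"Name:      host.minikube.internal",
			"Address 1: 192.168.39.1 host.minikube.internal",
		}, "\n")
		ip, err := hostIPFromNslookup(sample)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 192.168.39.1 for this sample
	}
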

                                                
                                    
TestMultiNode/serial/AddNode (42.96s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-007768 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-007768 -v=5 --alsologtostderr: (42.38904664s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-007768 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp testdata/cp-test.txt multinode-007768:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile777632050/001/cp-test_multinode-007768.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768:/home/docker/cp-test.txt multinode-007768-m02:/home/docker/cp-test_multinode-007768_multinode-007768-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test_multinode-007768_multinode-007768-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768:/home/docker/cp-test.txt multinode-007768-m03:/home/docker/cp-test_multinode-007768_multinode-007768-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test_multinode-007768_multinode-007768-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp testdata/cp-test.txt multinode-007768-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile777632050/001/cp-test_multinode-007768-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m02:/home/docker/cp-test.txt multinode-007768:/home/docker/cp-test_multinode-007768-m02_multinode-007768.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test_multinode-007768-m02_multinode-007768.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m02:/home/docker/cp-test.txt multinode-007768-m03:/home/docker/cp-test_multinode-007768-m02_multinode-007768-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test_multinode-007768-m02_multinode-007768-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp testdata/cp-test.txt multinode-007768-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile777632050/001/cp-test_multinode-007768-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m03:/home/docker/cp-test.txt multinode-007768:/home/docker/cp-test_multinode-007768-m03_multinode-007768.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768 "sudo cat /home/docker/cp-test_multinode-007768-m03_multinode-007768.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 cp multinode-007768-m03:/home/docker/cp-test.txt multinode-007768-m02:/home/docker/cp-test_multinode-007768-m03_multinode-007768-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 ssh -n multinode-007768-m02 "sudo cat /home/docker/cp-test_multinode-007768-m03_multinode-007768-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.07s)
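CopyFile above exercises `minikube cp` in every direction (host to node, node to host, node to node) using the <node>:<path> form for remote ends, and verifies each copy with `ssh -n <node> "sudo cat ..."`. A compact Go sketch of that copy-then-verify step; the binary path and profile name come from the log, while the expected file content is a stand-in:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	const (
		minikube = "out/minikube-linux-amd64"
		profile  = "multinode-007768"
	)

	// cpAndVerify copies src to <node>:<remotePath> with `minikube cp`, then reads
	// the destination back over `minikube ssh -n <node>` and compares it to want.
	func cpAndVerify(node, src, remotePath string, want []byte) error {
		dst := node + ":" + remotePath
		if out, err := exec.Command(minikube, "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("cp %s -> %s failed: %v: %s", src, dst, err, out)
		}
		got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+remotePath).Output()
		if err != nil {
			return err
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			return fmt.Errorf("content mismatch on %s:%s", node, remotePath)
		}
		return nil
	}

	func main() {
		want := []byte("hello from testdata/cp-test.txt") // stand-in for the real testdata file
		if err := cpAndVerify(profile, "testdata/cp-test.txt", "/home/docker/cp-test.txt", want); err != nil {
			fmt.Println(err)
		}
	}
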

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-007768 node stop m03: (1.611290828s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007768 status: exit status 7 (419.930057ms)

                                                
                                                
-- stdout --
	multinode-007768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-007768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-007768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr: exit status 7 (423.687491ms)

                                                
                                                
-- stdout --
	multinode-007768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-007768-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-007768-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:20:11.774096   41475 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:11.774370   41475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:11.774380   41475 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:11.774384   41475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:11.774666   41475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:20:11.774905   41475 out.go:368] Setting JSON to false
	I0929 11:20:11.774940   41475 mustload.go:65] Loading cluster: multinode-007768
	I0929 11:20:11.775113   41475 notify.go:220] Checking for updates...
	I0929 11:20:11.775479   41475 config.go:182] Loaded profile config "multinode-007768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:20:11.775500   41475 status.go:174] checking status of multinode-007768 ...
	I0929 11:20:11.776008   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:11.776112   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:11.794225   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0929 11:20:11.794793   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:11.795440   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:11.795466   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:11.795834   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:11.796030   41475 main.go:141] libmachine: (multinode-007768) Calling .GetState
	I0929 11:20:11.797843   41475 status.go:371] multinode-007768 host status = "Running" (err=<nil>)
	I0929 11:20:11.797860   41475 host.go:66] Checking if "multinode-007768" exists ...
	I0929 11:20:11.798157   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:11.798194   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:11.811635   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0929 11:20:11.812047   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:11.812449   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:11.812469   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:11.812810   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:11.812998   41475 main.go:141] libmachine: (multinode-007768) Calling .GetIP
	I0929 11:20:11.816086   41475 main.go:141] libmachine: (multinode-007768) DBG | domain multinode-007768 has defined MAC address 52:54:00:7d:14:2c in network mk-multinode-007768
	I0929 11:20:11.816562   41475 main.go:141] libmachine: (multinode-007768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:14:2c", ip: ""} in network mk-multinode-007768: {Iface:virbr1 ExpiryTime:2025-09-29 12:17:46 +0000 UTC Type:0 Mac:52:54:00:7d:14:2c Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-007768 Clientid:01:52:54:00:7d:14:2c}
	I0929 11:20:11.816580   41475 main.go:141] libmachine: (multinode-007768) DBG | domain multinode-007768 has defined IP address 192.168.39.185 and MAC address 52:54:00:7d:14:2c in network mk-multinode-007768
	I0929 11:20:11.816732   41475 host.go:66] Checking if "multinode-007768" exists ...
	I0929 11:20:11.817007   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:11.817057   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:11.829828   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0929 11:20:11.830201   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:11.830604   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:11.830627   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:11.830941   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:11.831147   41475 main.go:141] libmachine: (multinode-007768) Calling .DriverName
	I0929 11:20:11.831360   41475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:20:11.831385   41475 main.go:141] libmachine: (multinode-007768) Calling .GetSSHHostname
	I0929 11:20:11.834208   41475 main.go:141] libmachine: (multinode-007768) DBG | domain multinode-007768 has defined MAC address 52:54:00:7d:14:2c in network mk-multinode-007768
	I0929 11:20:11.834658   41475 main.go:141] libmachine: (multinode-007768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:14:2c", ip: ""} in network mk-multinode-007768: {Iface:virbr1 ExpiryTime:2025-09-29 12:17:46 +0000 UTC Type:0 Mac:52:54:00:7d:14:2c Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-007768 Clientid:01:52:54:00:7d:14:2c}
	I0929 11:20:11.834683   41475 main.go:141] libmachine: (multinode-007768) DBG | domain multinode-007768 has defined IP address 192.168.39.185 and MAC address 52:54:00:7d:14:2c in network mk-multinode-007768
	I0929 11:20:11.834774   41475 main.go:141] libmachine: (multinode-007768) Calling .GetSSHPort
	I0929 11:20:11.834912   41475 main.go:141] libmachine: (multinode-007768) Calling .GetSSHKeyPath
	I0929 11:20:11.835032   41475 main.go:141] libmachine: (multinode-007768) Calling .GetSSHUsername
	I0929 11:20:11.835141   41475 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/multinode-007768/id_rsa Username:docker}
	I0929 11:20:11.915934   41475 ssh_runner.go:195] Run: systemctl --version
	I0929 11:20:11.922643   41475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:20:11.940316   41475 kubeconfig.go:125] found "multinode-007768" server: "https://192.168.39.185:8443"
	I0929 11:20:11.940369   41475 api_server.go:166] Checking apiserver status ...
	I0929 11:20:11.940401   41475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:20:11.962226   41475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	W0929 11:20:11.973982   41475 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:20:11.974053   41475 ssh_runner.go:195] Run: ls
	I0929 11:20:11.979099   41475 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0929 11:20:11.983618   41475 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0929 11:20:11.983644   41475 status.go:463] multinode-007768 apiserver status = Running (err=<nil>)
	I0929 11:20:11.983656   41475 status.go:176] multinode-007768 status: &{Name:multinode-007768 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:20:11.983674   41475 status.go:174] checking status of multinode-007768-m02 ...
	I0929 11:20:11.983988   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:11.984042   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:11.997492   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I0929 11:20:11.997990   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:11.998470   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:11.998498   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:11.998873   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:11.999050   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetState
	I0929 11:20:12.000592   41475 status.go:371] multinode-007768-m02 host status = "Running" (err=<nil>)
	I0929 11:20:12.000609   41475 host.go:66] Checking if "multinode-007768-m02" exists ...
	I0929 11:20:12.000898   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:12.000941   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:12.014177   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0929 11:20:12.014623   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:12.015047   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:12.015068   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:12.015393   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:12.015562   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetIP
	I0929 11:20:12.018794   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | domain multinode-007768-m02 has defined MAC address 52:54:00:3d:53:64 in network mk-multinode-007768
	I0929 11:20:12.019276   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:64", ip: ""} in network mk-multinode-007768: {Iface:virbr1 ExpiryTime:2025-09-29 12:18:42 +0000 UTC Type:0 Mac:52:54:00:3d:53:64 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-007768-m02 Clientid:01:52:54:00:3d:53:64}
	I0929 11:20:12.019300   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | domain multinode-007768-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:3d:53:64 in network mk-multinode-007768
	I0929 11:20:12.019526   41475 host.go:66] Checking if "multinode-007768-m02" exists ...
	I0929 11:20:12.019820   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:12.019855   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:12.033115   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0929 11:20:12.033647   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:12.034095   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:12.034117   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:12.034505   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:12.034688   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .DriverName
	I0929 11:20:12.034853   41475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:20:12.034875   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetSSHHostname
	I0929 11:20:12.037701   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | domain multinode-007768-m02 has defined MAC address 52:54:00:3d:53:64 in network mk-multinode-007768
	I0929 11:20:12.038163   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:53:64", ip: ""} in network mk-multinode-007768: {Iface:virbr1 ExpiryTime:2025-09-29 12:18:42 +0000 UTC Type:0 Mac:52:54:00:3d:53:64 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-007768-m02 Clientid:01:52:54:00:3d:53:64}
	I0929 11:20:12.038177   41475 main.go:141] libmachine: (multinode-007768-m02) DBG | domain multinode-007768-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:3d:53:64 in network mk-multinode-007768
	I0929 11:20:12.038414   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetSSHPort
	I0929 11:20:12.038579   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetSSHKeyPath
	I0929 11:20:12.038731   41475 main.go:141] libmachine: (multinode-007768-m02) Calling .GetSSHUsername
	I0929 11:20:12.038892   41475 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21657-3816/.minikube/machines/multinode-007768-m02/id_rsa Username:docker}
	I0929 11:20:12.117671   41475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:20:12.137398   41475 status.go:176] multinode-007768-m02 status: &{Name:multinode-007768-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:20:12.137427   41475 status.go:174] checking status of multinode-007768-m03 ...
	I0929 11:20:12.137778   41475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:20:12.137821   41475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:20:12.151076   41475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0929 11:20:12.151575   41475 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:20:12.152165   41475 main.go:141] libmachine: Using API Version  1
	I0929 11:20:12.152181   41475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:20:12.152545   41475 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:20:12.152758   41475 main.go:141] libmachine: (multinode-007768-m03) Calling .GetState
	I0929 11:20:12.154367   41475 status.go:371] multinode-007768-m03 host status = "Stopped" (err=<nil>)
	I0929 11:20:12.154383   41475 status.go:384] host is not running, skipping remaining checks
	I0929 11:20:12.154388   41475 status.go:176] multinode-007768-m03 status: &{Name:multinode-007768-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
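Two conventions are visible above: `minikube status` exits non-zero (7 here) when any machine is stopped, and each node's result is the Status struct echoed in the trace (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...). CopyFile earlier asked for the machine-readable form with `status --output json`; the Go sketch below decodes that form on the assumption that a multi-node profile yields a JSON array whose keys match the struct fields shown above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeStatus mirrors the fields printed in the trace above; whether the JSON
	// keys match exactly is an assumption (encoding/json matches field names
	// case-insensitively, which gives some slack).
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-007768", "status", "--output", "json")
		out, err := cmd.Output()
		// Exit status 7 is expected when some node is stopped, so don't treat it as fatal.
		if err != nil {
			if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 7 {
				panic(err)
			}
		}
		var nodes []nodeStatus
		if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
			panic(jsonErr)
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
		}
	}
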

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-007768 node start m03 -v=5 --alsologtostderr: (37.777116905s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.41s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (303.52s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007768
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-007768
E0929 11:22:14.621816    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:25.042210    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-007768: (2m55.853120431s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007768 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007768 --wait=true -v=5 --alsologtostderr: (2m7.56850745s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007768
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-007768 node delete m03: (2.230595239s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.77s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (164.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 stop
E0929 11:27:08.113481    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:27:14.618400    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:27:25.042707    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-007768 stop: (2m44.12666461s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007768 status: exit status 7 (78.468786ms)

                                                
                                                
-- stdout --
	multinode-007768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-007768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr: exit status 7 (82.534007ms)

                                                
                                                
-- stdout --
	multinode-007768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-007768-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:28:41.106003   44294 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:28:41.106264   44294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:28:41.106274   44294 out.go:374] Setting ErrFile to fd 2...
	I0929 11:28:41.106278   44294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:28:41.106484   44294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:28:41.106646   44294 out.go:368] Setting JSON to false
	I0929 11:28:41.106690   44294 mustload.go:65] Loading cluster: multinode-007768
	I0929 11:28:41.106809   44294 notify.go:220] Checking for updates...
	I0929 11:28:41.107211   44294 config.go:182] Loaded profile config "multinode-007768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:28:41.107243   44294 status.go:174] checking status of multinode-007768 ...
	I0929 11:28:41.107781   44294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:28:41.107815   44294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:28:41.126019   44294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I0929 11:28:41.126423   44294 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:28:41.126947   44294 main.go:141] libmachine: Using API Version  1
	I0929 11:28:41.126975   44294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:28:41.127412   44294 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:28:41.127647   44294 main.go:141] libmachine: (multinode-007768) Calling .GetState
	I0929 11:28:41.129533   44294 status.go:371] multinode-007768 host status = "Stopped" (err=<nil>)
	I0929 11:28:41.129549   44294 status.go:384] host is not running, skipping remaining checks
	I0929 11:28:41.129557   44294 status.go:176] multinode-007768 status: &{Name:multinode-007768 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:28:41.129602   44294 status.go:174] checking status of multinode-007768-m02 ...
	I0929 11:28:41.129932   44294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:28:41.129980   44294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:28:41.142953   44294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0929 11:28:41.143329   44294 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:28:41.143761   44294 main.go:141] libmachine: Using API Version  1
	I0929 11:28:41.143787   44294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:28:41.144088   44294 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:28:41.144269   44294 main.go:141] libmachine: (multinode-007768-m02) Calling .GetState
	I0929 11:28:41.145791   44294 status.go:371] multinode-007768-m02 host status = "Stopped" (err=<nil>)
	I0929 11:28:41.145804   44294 status.go:384] host is not running, skipping remaining checks
	I0929 11:28:41.145810   44294 status.go:176] multinode-007768-m02 status: &{Name:multinode-007768-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (164.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.4s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007768 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007768 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.870155574s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007768 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.40s)
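The Ready check above hands kubectl a go-template that walks .status.conditions for every node and prints the status of the Ready condition. The sketch below runs the same template from Go and asserts every printed value is True; because exec.Command does not go through a shell, the outer single quotes from the logged command line are dropped:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as in the log (minus the shell quoting): print the Ready
		// condition status for each node, one per line.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			panic(err)
		}
		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				fmt.Println("found a node that is not Ready:", status)
				return
			}
		}
		fmt.Println("all nodes Ready")
	}
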

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007768
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007768-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-007768-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (61.600866ms)

                                                
                                                
-- stdout --
	* [multinode-007768-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-007768-m02' is duplicated with machine name 'multinode-007768-m02' in profile 'multinode-007768'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007768-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:30:17.685020    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007768-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.920645338s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-007768
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-007768: exit status 80 (217.933934ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-007768 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-007768-m03 already exists in multinode-007768-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-007768-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.07s)
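ValidateNameConflict confirms two guards: starting a profile whose name matches an existing machine name fails with MK_USAGE (exit 14), and `node add` refuses a node whose generated name already exists (exit 80). The sketch below is a hypothetical re-creation of the first guard only, assuming the naming scheme visible in this log (primary machine = profile name, extra nodes append -m02, -m03, ...); it is not minikube's actual implementation:

	package main

	import "fmt"

	// machineNames lists the machine names a profile owns, following the naming in
	// this log: the primary machine uses the profile name, extra nodes get -m02, -m03, ...
	func machineNames(profile string, nodes int) []string {
		names := []string{profile}
		for i := 2; i <= nodes; i++ {
			names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
		}
		return names
	}

	// conflictsWith reports which existing profile (if any) already owns the
	// requested name as a machine name; a hypothetical re-creation of the
	// MK_USAGE guard seen above.
	func conflictsWith(requested string, existing map[string]int) (string, bool) {
		for profile, nodes := range existing {
			for _, m := range machineNames(profile, nodes) {
				if requested == m {
					return profile, true
				}
			}
		}
		return "", false
	}

	func main() {
		existing := map[string]int{"multinode-007768": 2} // profile -> node count, illustrative
		if owner, dup := conflictsWith("multinode-007768-m02", existing); dup {
			fmt.Printf("profile name %q duplicates a machine name in profile %q\n",
				"multinode-007768-m02", owner)
		}
	}
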

                                                
                                    
TestScheduledStopUnix (115.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095431 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095431 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.418091188s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095431 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095431 -n scheduled-stop-095431
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095431 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:33:41.858038    7691 retry.go:31] will retry after 118.663µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.859267    7691 retry.go:31] will retry after 199.697µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.860421    7691 retry.go:31] will retry after 302.075µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.861560    7691 retry.go:31] will retry after 253.468µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.862719    7691 retry.go:31] will retry after 639.637µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.863844    7691 retry.go:31] will retry after 907.871µs: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.864984    7691 retry.go:31] will retry after 1.654133ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.867160    7691 retry.go:31] will retry after 1.952272ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.869391    7691 retry.go:31] will retry after 3.222968ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.873599    7691 retry.go:31] will retry after 2.175228ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.876083    7691 retry.go:31] will retry after 8.05458ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.884241    7691 retry.go:31] will retry after 9.928133ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.894486    7691 retry.go:31] will retry after 16.243145ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.911745    7691 retry.go:31] will retry after 10.272347ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
I0929 11:33:41.922998    7691 retry.go:31] will retry after 29.932241ms: open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095431 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095431 -n scheduled-stop-095431
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095431
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095431 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095431
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095431: exit status 7 (66.089757ms)

                                                
                                                
-- stdout --
	scheduled-stop-095431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095431 -n scheduled-stop-095431
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095431 -n scheduled-stop-095431: exit status 7 (66.261471ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095431
--- PASS: TestScheduledStopUnix (115.07s)
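The retry.go burst above shows the test polling for the scheduled-stop pid file with a short, roughly doubling backoff (sub-millisecond at first, tens of milliseconds later). A minimal Go sketch of that wait-for-file pattern; the intervals, doubling factor, and error handling are illustrative rather than the exact retry.go behavior:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls for path, sleeping with a growing backoff between attempts,
	// similar in spirit to the retry.go lines in the log above (illustrative only).
	func waitForFile(path string, maxWait time.Duration) error {
		delay := 100 * time.Microsecond
		deadline := time.Now().Add(maxWait)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			} else if !os.IsNotExist(err) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			fmt.Printf("will retry after %v\n", delay)
			time.Sleep(delay)
			delay *= 2 // rough doubling, as suggested by the intervals in the log
		}
	}

	func main() {
		// Path modeled on the one in the log; adjust for a real run.
		err := waitForFile("/home/jenkins/minikube-integration/21657-3816/.minikube/profiles/scheduled-stop-095431/pid", 2*time.Second)
		fmt.Println(err)
	}
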

                                                
                                    
x
+
TestRunningBinaryUpgrade (82.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.829191737 start -p running-upgrade-929554 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.829191737 start -p running-upgrade-929554 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.258895834s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-929554 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-929554 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.046170378s)
helpers_test.go:175: Cleaning up "running-upgrade-929554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-929554
--- PASS: TestRunningBinaryUpgrade (82.87s)

TestKubernetesUpgrade (192.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.978883515s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-197761
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-197761: (2.0788533s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-197761 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-197761 status --format={{.Host}}: exit status 7 (83.939309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.290270027s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-197761 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (82.263013ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-197761] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-197761
	    minikube start -p kubernetes-upgrade-197761 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1977612 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-197761 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-197761 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.405641594s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-197761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-197761
--- PASS: TestKubernetesUpgrade (192.93s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestPause/serial/Start (106.8s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-869600 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-869600 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.804455633s)
--- PASS: TestPause/serial/Start (106.80s)

TestStoppedBinaryUpgrade/Upgrade (141.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2356422041 start -p stopped-upgrade-880748 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2356422041 start -p stopped-upgrade-880748 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.708570217s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2356422041 -p stopped-upgrade-880748 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2356422041 -p stopped-upgrade-880748 stop: (1.982553775s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-880748 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-880748 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.817667136s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.51s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-880748
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-880748: (1.157600454s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestNetworkPlugins/group/false (3.05s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-718180 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-718180 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (103.651466ms)

                                                
                                                
-- stdout --
	* [false-718180] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:38:33.163176   51938 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:38:33.163435   51938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:38:33.163445   51938 out.go:374] Setting ErrFile to fd 2...
	I0929 11:38:33.163451   51938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:38:33.163651   51938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3816/.minikube/bin
	I0929 11:38:33.164186   51938 out.go:368] Setting JSON to false
	I0929 11:38:33.165084   51938 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4858,"bootTime":1759141055,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:38:33.165163   51938 start.go:140] virtualization: kvm guest
	I0929 11:38:33.167570   51938 out.go:179] * [false-718180] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:38:33.169080   51938 notify.go:220] Checking for updates...
	I0929 11:38:33.169112   51938 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:38:33.170422   51938 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:38:33.171854   51938 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	I0929 11:38:33.172990   51938 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	I0929 11:38:33.174251   51938 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:38:33.175960   51938 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:38:33.177672   51938 config.go:182] Loaded profile config "cert-expiration-415186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:38:33.177817   51938 config.go:182] Loaded profile config "cert-options-424773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:38:33.177954   51938 config.go:182] Loaded profile config "kubernetes-upgrade-197761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:38:33.178076   51938 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:38:33.215124   51938 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:38:33.216608   51938 start.go:304] selected driver: kvm2
	I0929 11:38:33.216627   51938 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:38:33.216642   51938 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:38:33.218503   51938 out.go:203] 
	W0929 11:38:33.219997   51938 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 11:38:33.221331   51938 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-718180 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-718180

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-718180

>>> host: /etc/nsswitch.conf:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/hosts:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/resolv.conf:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-718180

>>> host: crictl pods:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: crictl containers:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> k8s: describe netcat deployment:
error: context "false-718180" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-718180" does not exist

>>> k8s: netcat logs:
error: context "false-718180" does not exist

>>> k8s: describe coredns deployment:
error: context "false-718180" does not exist

>>> k8s: describe coredns pods:
error: context "false-718180" does not exist

>>> k8s: coredns logs:
error: context "false-718180" does not exist

>>> k8s: describe api server pod(s):
error: context "false-718180" does not exist

>>> k8s: api server logs:
error: context "false-718180" does not exist

>>> host: /etc/cni:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: ip a s:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: ip r s:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: iptables-save:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: iptables table nat:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> k8s: describe kube-proxy daemon set:
error: context "false-718180" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-718180" does not exist

>>> k8s: kube-proxy logs:
error: context "false-718180" does not exist

>>> host: kubelet daemon status:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: kubelet daemon config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> k8s: kubelet logs:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: cert-expiration-415186
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.6:8443
  name: kubernetes-upgrade-197761
contexts:
- context:
    cluster: cert-expiration-415186
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-415186
  name: cert-expiration-415186
- context:
    cluster: kubernetes-upgrade-197761
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-197761
  name: kubernetes-upgrade-197761
current-context: ""
kind: Config
users:
- name: cert-expiration-415186
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.key
- name: kubernetes-upgrade-197761
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-718180

>>> host: docker daemon status:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: docker daemon config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/docker/daemon.json:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: docker system info:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: cri-docker daemon status:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: cri-docker daemon config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: cri-dockerd version:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: containerd daemon status:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: containerd daemon config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/containerd/config.toml:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: containerd config dump:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: crio daemon status:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: crio daemon config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: /etc/crio:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

>>> host: crio config:
* Profile "false-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-718180"

----------------------- debugLogs end: false-718180 [took: 2.783749837s] --------------------------------
helpers_test.go:175: Cleaning up "false-718180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-718180
--- PASS: TestNetworkPlugins/group/false (3.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (70.617196ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-501083] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (44.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501083 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501083 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.087074456s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-501083 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.41s)

TestNetworkPlugins/group/auto/Start (108.16s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m48.161390648s)
--- PASS: TestNetworkPlugins/group/auto/Start (108.16s)

TestNoKubernetes/serial/StartWithStopK8s (31.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.341024734s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-501083 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-501083 status -o json: exit status 2 (259.725626ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-501083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-501083
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.44s)

TestNoKubernetes/serial/Start (24.73s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501083 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (24.725328838s)
--- PASS: TestNoKubernetes/serial/Start (24.73s)

TestNetworkPlugins/group/kindnet/Start (96.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.038629791s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.04s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-501083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-501083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.006112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.52s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-501083
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-501083: (1.357066929s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (35.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-501083 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-501083 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.540934988s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.54s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-718180 "pgrep -a kubelet"
I0929 11:40:50.106265    7691 config.go:182] Loaded profile config "auto-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rwqt2" [2771d421-a560-4952-882c-a4cce4cc2331] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rwqt2" [2771d421-a560-4952-882c-a4cce4cc2331] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004764704s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-501083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-501083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.051548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/calico/Start (73.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.008784607s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.01s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (83.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.42251306s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.42s)

TestNetworkPlugins/group/enable-default-cni/Start (92.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.742592893s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.74s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6mscr" [0490fb83-5b30-4228-beca-f8791f3ec9d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006447431s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-718180 "pgrep -a kubelet"
I0929 11:41:55.643650    7691 config.go:182] Loaded profile config "kindnet-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fhwlc" [97bebfe8-7c8d-4f26-b4e0-e655b5056aa9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fhwlc" [97bebfe8-7c8d-4f26-b4e0-e655b5056aa9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.005479538s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.29s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.15s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-tflcx" [90dbc8c0-6850-4b22-b14d-6d1fe795d290] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0929 11:42:14.613097    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-tflcx" [90dbc8c0-6850-4b22-b14d-6d1fe795d290] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.151555262s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.15s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-718180 "pgrep -a kubelet"
I0929 11:42:19.671065    7691 config.go:182] Loaded profile config "calico-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (13.02s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-718180 replace --force -f testdata/netcat-deployment.yaml
I0929 11:42:20.602018    7691 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0929 11:42:20.636411    7691 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m22r5" [cca59d9a-9097-48f1-afb8-813d01370096] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:42:25.042330    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m22r5" [cca59d9a-9097-48f1-afb8-813d01370096] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004172783s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (78.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.794524035s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.79s)
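Note: each Start entry in this group runs the same base flags and varies only the profile name and the --cni value. A small Go sketch of how that argument list could be assembled per plugin; the profile-naming scheme is an assumption taken from the log, and this is not how net_test.go actually builds its command line.

package main

import (
    "fmt"
    "strings"
)

// startArgs assembles the flags seen in the Start logs above for one
// network plugin. The "<cni>-718180" profile naming is an assumption.
func startArgs(cni string) []string {
    profile := fmt.Sprintf("%s-718180", cni)
    return []string{
        "start", "-p", profile,
        "--memory=3072", "--alsologtostderr",
        "--wait=true", "--wait-timeout=15m",
        "--cni=" + cni,
        "--driver=kvm2", "--container-runtime=crio",
        "--auto-update-drivers=false",
    }
}

func main() {
    args := startArgs("flannel")
    fmt.Println("would run: out/minikube-linux-amd64", strings.Join(args, " "))
}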

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-718180 "pgrep -a kubelet"
I0929 11:42:41.865872    7691 config.go:182] Loaded profile config "custom-flannel-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-47pln" [637db4e3-352a-43dc-ab19-148b6480305f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-47pln" [637db4e3-352a-43dc-ab19-148b6480305f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003179282s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (66.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-718180 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.929201636s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.93s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (105.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-333592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-333592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m45.231800013s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (105.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-718180 "pgrep -a kubelet"
I0929 11:43:20.371440    7691 config.go:182] Loaded profile config "enable-default-cni-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xq4t2" [66f18f3b-561f-443e-a96c-0123e469c2f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xq4t2" [66f18f3b-561f-443e-a96c-0123e469c2f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006058939s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6pvgr" [05ba04e1-9ed6-475f-b1f7-4151c857e20d] Running
E0929 11:43:48.115760    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005507367s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (106.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-205008 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-205008 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m46.098171826s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-718180 "pgrep -a kubelet"
I0929 11:43:53.356473    7691 config.go:182] Loaded profile config "flannel-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-75rvt" [bd00e64e-41ad-4f00-87ce-8b298b99d2d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-75rvt" [bd00e64e-41ad-4f00-87ce-8b298b99d2d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005212496s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-718180 "pgrep -a kubelet"
I0929 11:43:57.954550    7691 config.go:182] Loaded profile config "bridge-718180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-718180 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lqhnc" [61e94ddd-08f6-4f11-87dd-5749b339dcfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lqhnc" [61e94ddd-08f6-4f11-87dd-5749b339dcfd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006026455s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-718180 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-718180 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-209445 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-209445 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m26.382987356s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-810925 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-810925 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m48.657080543s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (108.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-333592 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cb958b6e-1d07-4d94-8703-71bacb1ae45b] Pending
helpers_test.go:352: "busybox" [cb958b6e-1d07-4d94-8703-71bacb1ae45b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cb958b6e-1d07-4d94-8703-71bacb1ae45b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004892129s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-333592 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)
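Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox label to report healthy, then reads the container's open-file limit with `ulimit -n`. A minimal Go sketch of that last step, assuming a hypothetical helper that parses the limit as an integer; the actual assertion in start_stop_delete_test.go is not shown in this log.

package main

import (
    "fmt"
    "os/exec"
    "strconv"
    "strings"
)

// openFileLimit runs `ulimit -n` inside the busybox pod and parses the result.
func openFileLimit(kubectlContext string) (int, error) {
    out, err := exec.Command("kubectl", "--context", kubectlContext,
        "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
    if err != nil {
        return 0, err
    }
    return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
    n, err := openFileLimit("old-k8s-version-333592")
    if err != nil {
        fmt.Println("ulimit check failed:", err)
        return
    }
    fmt.Printf("busybox open-file limit: %d\n", n)
}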

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-333592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-333592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3060224s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-333592 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (80.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-333592 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-333592 --alsologtostderr -v=3: (1m20.659208866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (80.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-205008 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3236b94f-c672-4d6a-973a-9467fdd14c04] Pending
helpers_test.go:352: "busybox" [3236b94f-c672-4d6a-973a-9467fdd14c04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3236b94f-c672-4d6a-973a-9467fdd14c04] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004901418s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-205008 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-205008 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-205008 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (84.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-205008 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-205008 --alsologtostderr -v=3: (1m24.038475394s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (84.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-209445 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [72428df5-a3ce-4242-a150-985ea2b5e2e1] Pending
E0929 11:45:50.364053    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.370432    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.381795    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.403233    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.444660    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.526090    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:50.687837    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:51.009564    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [72428df5-a3ce-4242-a150-985ea2b5e2e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 11:45:51.651263    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:45:52.933041    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [72428df5-a3ce-4242-a150-985ea2b5e2e1] Running
E0929 11:45:55.494832    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003916163s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-209445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-209445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-209445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (82.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-209445 --alsologtostderr -v=3
E0929 11:46:00.616154    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:10.857801    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-209445 --alsologtostderr -v=3: (1m22.146237599s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-810925 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e8810b07-bd10-4462-b257-ef32f4dbd560] Pending
helpers_test.go:352: "busybox" [e8810b07-bd10-4462-b257-ef32f4dbd560] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e8810b07-bd10-4462-b257-ef32f4dbd560] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003760483s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-810925 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-810925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-810925 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-810925 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-810925 --alsologtostderr -v=3: (1m23.372128659s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333592 -n old-k8s-version-333592
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333592 -n old-k8s-version-333592: exit status 7 (65.956986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-333592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
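Note: EnableAddonAfterStop queries `status --format={{.Host}}` on the stopped cluster and, as the log itself records, exit status 7 alongside "Stopped" may be ok. A Go sketch of a status check that tolerates such non-zero exits instead of failing outright; the helper name and the use of captured stdout as the state string are assumptions.

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// returns the printed state. A non-zero exit code is expected when the
// cluster is not running (the log above shows exit status 7 with "Stopped"),
// so it is reported alongside the state instead of being treated as fatal.
func hostStatus(profile string) (state string, exitCode int, err error) {
    cmd := exec.Command("out/minikube-linux-amd64", "status",
        "--format={{.Host}}", "-p", profile, "-n", profile)
    out, err := cmd.Output()
    state = strings.TrimSpace(string(out))
    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) {
        return state, exitErr.ExitCode(), nil // non-zero exit, but a state was still printed
    }
    return state, 0, err
}

func main() {
    state, code, err := hostStatus("old-k8s-version-333592")
    if err != nil {
        fmt.Println("status failed:", err)
        return
    }
    fmt.Printf("host state %q (exit status %d, may be ok)\n", state, code)
}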

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-333592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E0929 11:46:31.339121    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.380544    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.386996    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.398414    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.419857    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.461444    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.542957    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:49.704571    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:50.026264    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:50.668050    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:51.949708    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:54.511806    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:57.687278    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:59.634108    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-333592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.703597492s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333592 -n old-k8s-version-333592
E0929 11:47:14.465029    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:14.613628    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/functional-960153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-205008 -n no-preload-205008
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-205008 -n no-preload-205008: exit status 7 (63.964158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-205008 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (61.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-205008 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:47:09.875857    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:12.300794    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.176521    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.182928    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.194416    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.215868    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.257281    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.339515    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.501176    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:13.822689    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-205008 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m1.115540921s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-205008 -n no-preload-205008
E0929 11:48:11.319009    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lttq8" [c936c921-3189-4c1a-8aa1-32cac91fcc4d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 11:47:15.746599    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:18.308857    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lttq8" [c936c921-3189-4c1a-8aa1-32cac91fcc4d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.0045298s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209445 -n embed-certs-209445
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209445 -n embed-certs-209445: exit status 7 (76.296971ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-209445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-209445 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:47:23.430580    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:25.042418    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/addons-911532/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-209445 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (51.919610844s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-209445 -n embed-certs-209445
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lttq8" [c936c921-3189-4c1a-8aa1-32cac91fcc4d] Running
E0929 11:47:30.357552    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kindnet-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004912406s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-333592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-333592 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
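Note: VerifyKubernetesImages lists the images cached in the profile and reports any that are not expected minikube images (here kindest/kindnetd and the busybox test image). A loose Go sketch of that kind of allowlist check; it assumes the plain `image list` output prints one reference per line and uses an illustrative prefix list, whereas the real test parses --format=json.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// reportNonMinikubeImages lists images in the profile and prints any whose
// reference does not start with one of the given prefixes. Both the plain
// one-reference-per-line output format and the prefix allowlist passed in
// by the caller are assumptions for illustration only.
func reportNonMinikubeImages(profile string, allowedPrefixes []string) error {
    out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
    if err != nil {
        return err
    }
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        img := strings.TrimSpace(line)
        if img == "" {
            continue
        }
        allowed := false
        for _, p := range allowedPrefixes {
            if strings.HasPrefix(img, p) {
                allowed = true
                break
            }
        }
        if !allowed {
            fmt.Println("Found non-minikube image:", img)
        }
    }
    return nil
}

func main() {
    // Illustrative allowlist only; not the set used by the test.
    _ = reportNonMinikubeImages("old-k8s-version-333592",
        []string{"registry.k8s.io/", "docker.io/kubernetesui/"})
}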

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-333592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333592 -n old-k8s-version-333592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333592 -n old-k8s-version-333592: exit status 2 (267.34719ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333592 -n old-k8s-version-333592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333592 -n old-k8s-version-333592: exit status 2 (278.143181ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-333592 --alsologtostderr -v=1
E0929 11:47:33.672580    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333592 -n old-k8s-version-333592
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333592 -n old-k8s-version-333592
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.15s)
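
Note: each Pause subtest in this report runs the sequence shown above: pause the profile, query the API server and kubelet state through Go-template status output (exit status 2 together with "Paused"/"Stopped" is the expected result for a paused cluster), then unpause and query again. A minimal shell sketch of that sequence, assuming the old-k8s-version-333592 profile from this run:

    out/minikube-linux-amd64 pause -p old-k8s-version-333592 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-333592   # prints "Paused", exits 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p old-k8s-version-333592   # prints "Stopped", exits 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-333592 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-333592   # exits 0 once unpaused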

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-052656 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:47:42.168582    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.175099    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.186559    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.208022    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.249495    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.330930    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.492560    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:42.814548    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:43.456434    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:44.738172    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:47.300306    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-052656 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (59.626737039s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925: exit status 7 (80.569636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-810925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-810925 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:47:52.422149    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:54.154267    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:02.663470    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-810925 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m5.982972875s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jgj8c" [0372aa6b-e20e-4be4-a4be-b1723b2bde7e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004950442s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
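
Note: the UserAppExistsAfterStop subtests wait up to 9 minutes for the dashboard pod to become healthy after the restart. The test polls with its own helpers rather than kubectl, but a roughly equivalent manual check would look like the sketch below, assuming the no-preload-205008 context from this run:

    kubectl --context no-preload-205008 -n kubernetes-dashboard wait \
      --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m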

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s47s8" [ed42ec13-93ba-4373-ba8e-cd63e40dbd7e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s47s8" [ed42ec13-93ba-4373-ba8e-cd63e40dbd7e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004676481s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jgj8c" [0372aa6b-e20e-4be4-a4be-b1723b2bde7e] Running
E0929 11:48:20.651127    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.657666    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.669876    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.691438    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.732978    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.814503    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:20.976224    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:21.298045    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:21.939795    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004447338s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-205008 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-205008 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-205008 --alsologtostderr -v=1
E0929 11:48:23.145592    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:23.222029    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-205008 --alsologtostderr -v=1: (1.113672362s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-205008 -n no-preload-205008
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-205008 -n no-preload-205008: exit status 2 (304.747809ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-205008 -n no-preload-205008
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-205008 -n no-preload-205008: exit status 2 (290.830227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-205008 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-205008 -n no-preload-205008
E0929 11:48:25.783799    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-205008 -n no-preload-205008
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s47s8" [ed42ec13-93ba-4373-ba8e-cd63e40dbd7e] Running
E0929 11:48:30.905381    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003769679s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-209445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-209445 image list --format=json
E0929 11:48:34.222134    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/auto-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-209445 --alsologtostderr -v=1
E0929 11:48:35.116249    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/calico-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-209445 --alsologtostderr -v=1: (1.356231478s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209445 -n embed-certs-209445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209445 -n embed-certs-209445: exit status 2 (326.766691ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-209445 -n embed-certs-209445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-209445 -n embed-certs-209445: exit status 2 (340.701545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-209445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-209445 --alsologtostderr -v=1: (1.073924451s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-209445 -n embed-certs-209445
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-209445 -n embed-certs-209445
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-052656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-052656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.481856409s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-052656 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-052656 --alsologtostderr -v=3: (11.039888981s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052656 -n newest-cni-052656
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052656 -n newest-cni-052656: exit status 7 (64.956768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-052656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
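
Note: EnableAddonAfterStop confirms that addons can be enabled while the cluster is stopped (status exits 7 with "Stopped"); the addon is then applied on the next start, which is what the SecondStart test below exercises. A minimal shell sketch of the same sequence, assuming the newest-cni-052656 profile from this run:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-052656     # prints "Stopped", exits 7
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-052656 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-amd64 start -p newest-cni-052656                           # addon is applied on this start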

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-052656 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 11:48:49.689444    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:52.250803    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-052656 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (35.073183131s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052656 -n newest-cni-052656
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bhs7f" [354c2f57-a6ac-4bce-a93c-05c425758a65] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 11:48:57.373057    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.405675    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.412994    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.424487    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.446040    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.487526    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.569061    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:58.730719    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:59.052498    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:48:59.693797    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bhs7f" [354c2f57-a6ac-4bce-a93c-05c425758a65] Running
E0929 11:49:00.975487    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:01.629629    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/enable-default-cni-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:03.537779    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:04.107716    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/custom-flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004268574s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bhs7f" [354c2f57-a6ac-4bce-a93c-05c425758a65] Running
E0929 11:49:07.615028    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/flannel-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:49:08.659772    7691 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/bridge-718180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003606458s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-810925 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-810925 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-810925 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-810925 --alsologtostderr -v=1: (1.230064108s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925: exit status 2 (252.670249ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925: exit status 2 (254.283224ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-810925 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-810925 --alsologtostderr -v=1: (1.097150978s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-810925 -n default-k8s-diff-port-810925
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-052656 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-052656 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052656 -n newest-cni-052656
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052656 -n newest-cni-052656: exit status 2 (240.133436ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052656 -n newest-cni-052656
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052656 -n newest-cni-052656: exit status 2 (235.536204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-052656 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052656 -n newest-cni-052656
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052656 -n newest-cni-052656
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                    

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 2.8
270 TestNetworkPlugins/group/cilium 3.72
279 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-911532 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-718180 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-718180" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: cert-expiration-415186
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.6:8443
  name: kubernetes-upgrade-197761
contexts:
- context:
    cluster: cert-expiration-415186
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-415186
  name: cert-expiration-415186
- context:
    cluster: kubernetes-upgrade-197761
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-197761
  name: kubernetes-upgrade-197761
current-context: ""
kind: Config
users:
- name: cert-expiration-415186
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.key
- name: kubernetes-upgrade-197761
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.key
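In the config above, current-context is empty and there is no kubenet-718180 entry under clusters, contexts, or users, which is why every kubectl query in this debug log reports that the context was not found. A minimal Go sketch of checking that condition with client-go follows; the kubeconfig path and the standalone program are assumptions for illustration only, not part of the test suite.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The path is an assumption; point it at the kubeconfig dumped above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	if _, ok := cfg.Contexts["kubenet-718180"]; !ok {
		fmt.Println(`context "kubenet-718180" does not exist`)
	}
}

Run against the file dumped above, this would print the same complaint the debug log shows for the missing kubenet-718180 context.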

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-718180

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-718180"

                                                
                                                
----------------------- debugLogs end: kubenet-718180 [took: 2.659078078s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-718180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-718180
--- SKIP: TestNetworkPlugins/group/kubenet (2.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-718180 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-718180" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: cert-expiration-415186
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21657-3816/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.6:8443
  name: kubernetes-upgrade-197761
contexts:
- context:
    cluster: cert-expiration-415186
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:38:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-415186
  name: cert-expiration-415186
- context:
    cluster: kubernetes-upgrade-197761
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:37:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-197761
  name: kubernetes-upgrade-197761
current-context: ""
kind: Config
users:
- name: cert-expiration-415186
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/cert-expiration-415186/client.key
- name: kubernetes-upgrade-197761
  user:
    client-certificate: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.crt
    client-key: /home/jenkins/minikube-integration/21657-3816/.minikube/profiles/kubernetes-upgrade-197761/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-718180

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-718180" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-718180"

                                                
                                                
----------------------- debugLogs end: cilium-718180 [took: 3.552021399s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-718180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-718180
--- SKIP: TestNetworkPlugins/group/cilium (3.72s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-477952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-477952
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    