Test Report: KVM_Linux_crio 21642

14b81faeac061460adc41f1c17794999a5c5cccd:2025-09-27:41636

Test failures (14/324)

TestAddons/parallel/Ingress (492.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-330674 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-330674 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-330674 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [cf3126e1-0cb8-4c12-8028-997b82450384] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-26 22:40:07.218833222 +0000 UTC m=+673.049232683
addons_test.go:252: (dbg) Run:  kubectl --context addons-330674 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-330674 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-330674/192.168.39.36
Start Time:       Fri, 26 Sep 2025 22:32:06 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvdz7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xvdz7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m1s                  default-scheduler  Successfully assigned default/nginx to addons-330674
Normal   BackOff    47s (x11 over 7m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     47s (x11 over 7m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    33s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     3s (x5 over 7m30s)    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3s (x5 over 7m30s)    kubelet            Error: ErrImagePull
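The Events above point at the root cause: every attempt to pull docker.io/nginx:alpine is rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit, so the pod never leaves ImagePullBackOff. Docker documents a way to check the remaining anonymous allowance by fetching a token for the ratelimitpreview/test repository and reading the rate-limit headers from a HEAD request; below is a minimal Go sketch of that check (a diagnostic aside, not part of the test suite, and it assumes the documented endpoints are unchanged):

```go
// ratelimit_check.go - query Docker Hub's anonymous pull allowance.
// Endpoints follow Docker's published rate-limit documentation; hedged sketch.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Anonymous token for the special rate-limit preview repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; the response carries the rate-limit headers
	//    without consuming a pull.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	head, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer head.Body.Close()

	fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
}
```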
addons_test.go:252: (dbg) Run:  kubectl --context addons-330674 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-330674 logs nginx -n default: exit status 1 (71.409547ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-330674 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
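For context, addons_test.go:252 is a poll loop: it repeatedly lists pods matching run=nginx and gives up once the 8m0s context expires, which is the "context deadline exceeded" seen above. A minimal sketch of an equivalent wait, shelling out to kubectl under a context deadline (the helper name, interval, and the phase-only check are simplifications, not the actual test code):

```go
// podwait.go - wait for a labelled pod to report phase Running, or time out.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodRunning polls `kubectl get pod` every few seconds until the
// selected pod reports phase Running or ctx expires. (The real test also
// checks the Ready condition; phase alone keeps the sketch short.)
func waitForPodRunning(ctx context.Context, kubeContext, ns, selector string) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"get", "pod", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q not running: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
	defer cancel()
	if err := waitForPodRunning(ctx, "addons-330674", "default", "run=nginx"); err != nil {
		fmt.Println(err) // e.g. "... context deadline exceeded"
	}
}
```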
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-330674 -n addons-330674
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 logs -n 25: (1.354122056s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	────────────────────────────────────────────────────────────────────────────────
	delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC
	delete  │ -p download-only-123956 │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC
	delete  │ -p download-only-957403 │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC
	delete  │ -p download-only-123956 │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC
	start   │ --download-only -p binary-mirror-019280 --alsologtostderr --binary-mirror http://127.0.0.1:43721 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │
	delete  │ -p binary-mirror-019280 │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC
	addons  │ enable dashboard -p addons-330674 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │
	addons  │ disable dashboard -p addons-330674 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │
	start   │ -p addons-330674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC
	addons  │ addons-330674 addons disable volcano --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC
	addons  │ addons-330674 addons disable gcp-auth --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC
	addons  │ enable headlamp -p addons-330674 --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC
	addons  │ addons-330674 addons disable yakd --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable metrics-server --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC
	addons  │ addons-330674 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	ip      │ addons-330674 ip │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable registry --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable headlamp --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-330674 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable registry-creds --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC
	addons  │ addons-330674 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC
	addons  │ addons-330674 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:38 UTC │ 26 Sep 25 22:38 UTC
	addons  │ addons-330674 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-330674 │ jenkins │ v1.37.0 │ 26 Sep 25 22:38 UTC │ 26 Sep 25 22:38 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:07.131240   10530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:07.131540   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131551   10530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:07.131555   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131846   10530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:29:07.132459   10530 out.go:368] Setting JSON to false
	I0926 22:29:07.133384   10530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":692,"bootTime":1758925055,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:07.133472   10530 start.go:140] virtualization: kvm guest
	I0926 22:29:07.135388   10530 out.go:179] * [addons-330674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:07.136853   10530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:07.136850   10530 notify.go:220] Checking for updates...
	I0926 22:29:07.138284   10530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:07.139566   10530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:07.140695   10530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.142048   10530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:07.143327   10530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:07.144805   10530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:07.174434   10530 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 22:29:07.175943   10530 start.go:304] selected driver: kvm2
	I0926 22:29:07.175964   10530 start.go:924] validating driver "kvm2" against <nil>
	I0926 22:29:07.175981   10530 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:07.176689   10530 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.176795   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.190390   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.190423   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.204480   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.204525   10530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:07.204841   10530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:07.204881   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:07.204938   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:07.204949   10530 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:07.205010   10530 start.go:348] cluster config:
	{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:07.205117   10530 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.206957   10530 out.go:179] * Starting "addons-330674" primary control-plane node in "addons-330674" cluster
	I0926 22:29:07.208231   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:07.208282   10530 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:29:07.208298   10530 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:07.208403   10530 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:07.208418   10530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 22:29:07.208880   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:07.208921   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json: {Name:mk7658ee06b88bc4bb74708f21dcb24d049f1fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:07.209105   10530 start.go:360] acquireMachinesLock for addons-330674: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 22:29:07.209167   10530 start.go:364] duration metric: took 45.106µs to acquireMachinesLock for "addons-330674"
	I0926 22:29:07.209187   10530 start.go:93] Provisioning new machine with config: &{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:07.209253   10530 start.go:125] createHost starting for "" (driver="kvm2")
	I0926 22:29:07.210855   10530 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0926 22:29:07.210999   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:07.211043   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:07.224060   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I0926 22:29:07.224551   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:07.225094   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:07.225117   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:07.225449   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:07.225645   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:07.225795   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:07.225959   10530 start.go:159] libmachine.API.Create for "addons-330674" (driver="kvm2")
	I0926 22:29:07.225987   10530 client.go:168] LocalClient.Create starting
	I0926 22:29:07.226026   10530 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem
	I0926 22:29:07.252167   10530 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem
	I0926 22:29:07.383695   10530 main.go:141] libmachine: Running pre-create checks...
	I0926 22:29:07.383717   10530 main.go:141] libmachine: (addons-330674) Calling .PreCreateCheck
	I0926 22:29:07.384236   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:07.384647   10530 main.go:141] libmachine: Creating machine...
	I0926 22:29:07.384660   10530 main.go:141] libmachine: (addons-330674) Calling .Create
	I0926 22:29:07.384806   10530 main.go:141] libmachine: (addons-330674) creating domain...
	I0926 22:29:07.384837   10530 main.go:141] libmachine: (addons-330674) creating network...
	I0926 22:29:07.386337   10530 main.go:141] libmachine: (addons-330674) DBG | found existing default network
	I0926 22:29:07.386536   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.386551   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>default</name>
	I0926 22:29:07.386561   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0926 22:29:07.386567   10530 main.go:141] libmachine: (addons-330674) DBG |   <forward mode='nat'>
	I0926 22:29:07.386576   10530 main.go:141] libmachine: (addons-330674) DBG |     <nat>
	I0926 22:29:07.386584   10530 main.go:141] libmachine: (addons-330674) DBG |       <port start='1024' end='65535'/>
	I0926 22:29:07.386593   10530 main.go:141] libmachine: (addons-330674) DBG |     </nat>
	I0926 22:29:07.386600   10530 main.go:141] libmachine: (addons-330674) DBG |   </forward>
	I0926 22:29:07.386609   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0926 22:29:07.386624   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0926 22:29:07.386674   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0926 22:29:07.386695   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.386722   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0926 22:29:07.386749   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.386765   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.386773   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.386781   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387226   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.387079   10558 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I0926 22:29:07.387252   10530 main.go:141] libmachine: (addons-330674) DBG | defining private network:
	I0926 22:29:07.387264   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387271   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.387280   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.387287   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.387305   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.387341   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.387364   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.387386   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.387410   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.387419   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.387423   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.393131   10530 main.go:141] libmachine: (addons-330674) DBG | creating private network mk-addons-330674 192.168.39.0/24...
	I0926 22:29:07.460176   10530 main.go:141] libmachine: (addons-330674) DBG | private network mk-addons-330674 192.168.39.0/24 created
	I0926 22:29:07.460404   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.460423   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.460433   10530 main.go:141] libmachine: (addons-330674) setting up store path in /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:07.460457   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>e70fd5af-70d4-4d49-913b-79a95d8fca9c</uuid>
	I0926 22:29:07.460472   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0926 22:29:07.460480   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:a6:90:55'/>
	I0926 22:29:07.460493   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.460501   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.460527   10530 main.go:141] libmachine: (addons-330674) building disk image from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 22:29:07.460539   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.460549   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.460556   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.460567   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.460574   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.460593   10530 main.go:141] libmachine: (addons-330674) Downloading /home/jenkins/minikube-integration/21642-6020/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0926 22:29:07.460625   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.460644   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.460403   10558 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.709924   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.709791   10558 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa...
	I0926 22:29:08.463909   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463682   10558 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk...
	I0926 22:29:08.463957   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 (perms=drwx------)
	I0926 22:29:08.463983   10530 main.go:141] libmachine: (addons-330674) DBG | Writing magic tar header
	I0926 22:29:08.463998   10530 main.go:141] libmachine: (addons-330674) DBG | Writing SSH key tar header
	I0926 22:29:08.464006   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463801   10558 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:08.464019   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines (perms=drwxr-xr-x)
	I0926 22:29:08.464034   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube (perms=drwxr-xr-x)
	I0926 22:29:08.464052   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674
	I0926 22:29:08.464064   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020 (perms=drwxrwxr-x)
	I0926 22:29:08.464074   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0926 22:29:08.464080   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0926 22:29:08.464099   10530 main.go:141] libmachine: (addons-330674) defining domain...
	I0926 22:29:08.464155   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines
	I0926 22:29:08.464176   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:08.464184   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020
	I0926 22:29:08.464190   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0926 22:29:08.464208   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins
	I0926 22:29:08.464242   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home
	I0926 22:29:08.464263   10530 main.go:141] libmachine: (addons-330674) DBG | skipping /home - not owner
	I0926 22:29:08.465374   10530 main.go:141] libmachine: (addons-330674) defining domain using XML: 
	I0926 22:29:08.465403   10530 main.go:141] libmachine: (addons-330674) <domain type='kvm'>
	I0926 22:29:08.465410   10530 main.go:141] libmachine: (addons-330674)   <name>addons-330674</name>
	I0926 22:29:08.465415   10530 main.go:141] libmachine: (addons-330674)   <memory unit='MiB'>4096</memory>
	I0926 22:29:08.465420   10530 main.go:141] libmachine: (addons-330674)   <vcpu>2</vcpu>
	I0926 22:29:08.465424   10530 main.go:141] libmachine: (addons-330674)   <features>
	I0926 22:29:08.465428   10530 main.go:141] libmachine: (addons-330674)     <acpi/>
	I0926 22:29:08.465432   10530 main.go:141] libmachine: (addons-330674)     <apic/>
	I0926 22:29:08.465438   10530 main.go:141] libmachine: (addons-330674)     <pae/>
	I0926 22:29:08.465444   10530 main.go:141] libmachine: (addons-330674)   </features>
	I0926 22:29:08.465449   10530 main.go:141] libmachine: (addons-330674)   <cpu mode='host-passthrough'>
	I0926 22:29:08.465453   10530 main.go:141] libmachine: (addons-330674)   </cpu>
	I0926 22:29:08.465458   10530 main.go:141] libmachine: (addons-330674)   <os>
	I0926 22:29:08.465462   10530 main.go:141] libmachine: (addons-330674)     <type>hvm</type>
	I0926 22:29:08.465467   10530 main.go:141] libmachine: (addons-330674)     <boot dev='cdrom'/>
	I0926 22:29:08.465471   10530 main.go:141] libmachine: (addons-330674)     <boot dev='hd'/>
	I0926 22:29:08.465481   10530 main.go:141] libmachine: (addons-330674)     <bootmenu enable='no'/>
	I0926 22:29:08.465491   10530 main.go:141] libmachine: (addons-330674)   </os>
	I0926 22:29:08.465499   10530 main.go:141] libmachine: (addons-330674)   <devices>
	I0926 22:29:08.465506   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='cdrom'>
	I0926 22:29:08.465541   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.465556   10530 main.go:141] libmachine: (addons-330674)       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.465565   10530 main.go:141] libmachine: (addons-330674)       <readonly/>
	I0926 22:29:08.465571   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465580   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='disk'>
	I0926 22:29:08.465592   10530 main.go:141] libmachine: (addons-330674)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0926 22:29:08.465600   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.465607   10530 main.go:141] libmachine: (addons-330674)       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.465612   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465616   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465624   10530 main.go:141] libmachine: (addons-330674)       <source network='mk-addons-330674'/>
	I0926 22:29:08.465630   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465639   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465648   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465665   10530 main.go:141] libmachine: (addons-330674)       <source network='default'/>
	I0926 22:29:08.465676   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465681   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465685   10530 main.go:141] libmachine: (addons-330674)     <serial type='pty'>
	I0926 22:29:08.465690   10530 main.go:141] libmachine: (addons-330674)       <target port='0'/>
	I0926 22:29:08.465696   10530 main.go:141] libmachine: (addons-330674)     </serial>
	I0926 22:29:08.465706   10530 main.go:141] libmachine: (addons-330674)     <console type='pty'>
	I0926 22:29:08.465714   10530 main.go:141] libmachine: (addons-330674)       <target type='serial' port='0'/>
	I0926 22:29:08.465740   10530 main.go:141] libmachine: (addons-330674)     </console>
	I0926 22:29:08.465754   10530 main.go:141] libmachine: (addons-330674)     <rng model='virtio'>
	I0926 22:29:08.465774   10530 main.go:141] libmachine: (addons-330674)       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.465783   10530 main.go:141] libmachine: (addons-330674)     </rng>
	I0926 22:29:08.465790   10530 main.go:141] libmachine: (addons-330674)   </devices>
	I0926 22:29:08.465796   10530 main.go:141] libmachine: (addons-330674) </domain>
	I0926 22:29:08.465805   10530 main.go:141] libmachine: (addons-330674) 
	I0926 22:29:08.473977   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:84:c4:98 in network default
	I0926 22:29:08.474678   10530 main.go:141] libmachine: (addons-330674) starting domain...
	I0926 22:29:08.474698   10530 main.go:141] libmachine: (addons-330674) ensuring networks are active...
	I0926 22:29:08.474707   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:08.475451   10530 main.go:141] libmachine: (addons-330674) Ensuring network default is active
	I0926 22:29:08.475817   10530 main.go:141] libmachine: (addons-330674) Ensuring network mk-addons-330674 is active
	I0926 22:29:08.476435   10530 main.go:141] libmachine: (addons-330674) getting domain XML...
	I0926 22:29:08.477581   10530 main.go:141] libmachine: (addons-330674) DBG | starting domain XML:
	I0926 22:29:08.477607   10530 main.go:141] libmachine: (addons-330674) DBG | <domain type='kvm'>
	I0926 22:29:08.477626   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>addons-330674</name>
	I0926 22:29:08.477633   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>0270d5ce-774d-47cc-84b7-b73291b9eb86</uuid>
	I0926 22:29:08.477643   10530 main.go:141] libmachine: (addons-330674) DBG |   <memory unit='KiB'>4194304</memory>
	I0926 22:29:08.477648   10530 main.go:141] libmachine: (addons-330674) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0926 22:29:08.477654   10530 main.go:141] libmachine: (addons-330674) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 22:29:08.477661   10530 main.go:141] libmachine: (addons-330674) DBG |   <os>
	I0926 22:29:08.477680   10530 main.go:141] libmachine: (addons-330674) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 22:29:08.477689   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='cdrom'/>
	I0926 22:29:08.477699   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='hd'/>
	I0926 22:29:08.477710   10530 main.go:141] libmachine: (addons-330674) DBG |     <bootmenu enable='no'/>
	I0926 22:29:08.477719   10530 main.go:141] libmachine: (addons-330674) DBG |   </os>
	I0926 22:29:08.477724   10530 main.go:141] libmachine: (addons-330674) DBG |   <features>
	I0926 22:29:08.477729   10530 main.go:141] libmachine: (addons-330674) DBG |     <acpi/>
	I0926 22:29:08.477735   10530 main.go:141] libmachine: (addons-330674) DBG |     <apic/>
	I0926 22:29:08.477740   10530 main.go:141] libmachine: (addons-330674) DBG |     <pae/>
	I0926 22:29:08.477744   10530 main.go:141] libmachine: (addons-330674) DBG |   </features>
	I0926 22:29:08.477753   10530 main.go:141] libmachine: (addons-330674) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 22:29:08.477770   10530 main.go:141] libmachine: (addons-330674) DBG |   <clock offset='utc'/>
	I0926 22:29:08.477812   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 22:29:08.477847   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_reboot>restart</on_reboot>
	I0926 22:29:08.477862   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_crash>destroy</on_crash>
	I0926 22:29:08.477872   10530 main.go:141] libmachine: (addons-330674) DBG |   <devices>
	I0926 22:29:08.477883   10530 main.go:141] libmachine: (addons-330674) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 22:29:08.477893   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='cdrom'>
	I0926 22:29:08.477901   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw'/>
	I0926 22:29:08.477910   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.477939   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.477962   10530 main.go:141] libmachine: (addons-330674) DBG |       <readonly/>
	I0926 22:29:08.477976   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 22:29:08.477987   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.477997   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='disk'>
	I0926 22:29:08.478009   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 22:29:08.478027   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.478038   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.478054   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 22:29:08.478064   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.478085   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 22:29:08.478104   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 22:29:08.478118   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478135   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 22:29:08.478148   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 22:29:08.478167   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 22:29:08.478178   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478195   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478213   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:fe:3c:4a'/>
	I0926 22:29:08.478223   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='mk-addons-330674'/>
	I0926 22:29:08.478233   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478243   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 22:29:08.478252   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478264   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478275   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:84:c4:98'/>
	I0926 22:29:08.478286   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='default'/>
	I0926 22:29:08.478308   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478322   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 22:29:08.478330   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478350   10530 main.go:141] libmachine: (addons-330674) DBG |     <serial type='pty'>
	I0926 22:29:08.478362   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='isa-serial' port='0'>
	I0926 22:29:08.478459   10530 main.go:141] libmachine: (addons-330674) DBG |         <model name='isa-serial'/>
	I0926 22:29:08.478491   10530 main.go:141] libmachine: (addons-330674) DBG |       </target>
	I0926 22:29:08.478512   10530 main.go:141] libmachine: (addons-330674) DBG |     </serial>
	I0926 22:29:08.478522   10530 main.go:141] libmachine: (addons-330674) DBG |     <console type='pty'>
	I0926 22:29:08.478537   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='serial' port='0'/>
	I0926 22:29:08.478548   10530 main.go:141] libmachine: (addons-330674) DBG |     </console>
	I0926 22:29:08.478564   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='mouse' bus='ps2'/>
	I0926 22:29:08.478581   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 22:29:08.478595   10530 main.go:141] libmachine: (addons-330674) DBG |     <audio id='1' type='none'/>
	I0926 22:29:08.478607   10530 main.go:141] libmachine: (addons-330674) DBG |     <memballoon model='virtio'>
	I0926 22:29:08.478622   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 22:29:08.478634   10530 main.go:141] libmachine: (addons-330674) DBG |     </memballoon>
	I0926 22:29:08.478649   10530 main.go:141] libmachine: (addons-330674) DBG |     <rng model='virtio'>
	I0926 22:29:08.478659   10530 main.go:141] libmachine: (addons-330674) DBG |       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.478667   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 22:29:08.478674   10530 main.go:141] libmachine: (addons-330674) DBG |     </rng>
	I0926 22:29:08.478679   10530 main.go:141] libmachine: (addons-330674) DBG |   </devices>
	I0926 22:29:08.478685   10530 main.go:141] libmachine: (addons-330674) DBG | </domain>
	I0926 22:29:08.478692   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:09.794414   10530 main.go:141] libmachine: (addons-330674) waiting for domain to start...
	I0926 22:29:09.795757   10530 main.go:141] libmachine: (addons-330674) domain is now running
	I0926 22:29:09.795779   10530 main.go:141] libmachine: (addons-330674) waiting for IP...
	I0926 22:29:09.796619   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:09.797072   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:09.797094   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:09.797358   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:09.797434   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:09.797363   10558 retry.go:31] will retry after 273.626577ms: waiting for domain to come up
	I0926 22:29:10.073299   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.073781   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.073821   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.074074   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.074127   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.074070   10558 retry.go:31] will retry after 328.642045ms: waiting for domain to come up
	I0926 22:29:10.404766   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.405330   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.405358   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.405650   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.405699   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.405633   10558 retry.go:31] will retry after 438.92032ms: waiting for domain to come up
	I0926 22:29:10.846204   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.846643   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.846672   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.846906   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.846933   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.846871   10558 retry.go:31] will retry after 558.153234ms: waiting for domain to come up
	I0926 22:29:11.406899   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.407422   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.407438   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.407834   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.407882   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.407800   10558 retry.go:31] will retry after 539.111569ms: waiting for domain to come up
	I0926 22:29:11.948608   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.949098   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.949119   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.949455   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.949481   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.949435   10558 retry.go:31] will retry after 832.890938ms: waiting for domain to come up
	I0926 22:29:12.784343   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:12.784868   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:12.784895   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:12.785122   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:12.785150   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:12.785094   10558 retry.go:31] will retry after 734.304778ms: waiting for domain to come up
	I0926 22:29:13.521093   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:13.521705   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:13.521742   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:13.521961   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:13.521985   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:13.521931   10558 retry.go:31] will retry after 1.380433504s: waiting for domain to come up
	I0926 22:29:14.904439   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:14.904924   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:14.904953   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:14.905190   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:14.905218   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:14.905169   10558 retry.go:31] will retry after 1.496759703s: waiting for domain to come up
	I0926 22:29:16.404048   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:16.404524   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:16.404544   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:16.404780   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:16.404815   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:16.404749   10558 retry.go:31] will retry after 2.080327572s: waiting for domain to come up
	I0926 22:29:18.486681   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:18.487121   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:18.487136   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:18.487537   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:18.487640   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:18.487542   10558 retry.go:31] will retry after 2.860875374s: waiting for domain to come up
	I0926 22:29:21.351807   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:21.352511   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:21.352546   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:21.352882   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:21.352912   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:21.352841   10558 retry.go:31] will retry after 3.24989466s: waiting for domain to come up
	I0926 22:29:24.605898   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.606496   10530 main.go:141] libmachine: (addons-330674) found domain IP: 192.168.39.36
	I0926 22:29:24.606514   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has current primary IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.606520   10530 main.go:141] libmachine: (addons-330674) reserving static IP address...
	I0926 22:29:24.607058   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find host DHCP lease matching {name: "addons-330674", mac: "52:54:00:fe:3c:4a", ip: "192.168.39.36"} in network mk-addons-330674
	I0926 22:29:24.801972   10530 main.go:141] libmachine: (addons-330674) DBG | Getting to WaitForSSH function...
	I0926 22:29:24.802012   10530 main.go:141] libmachine: (addons-330674) reserved static IP address 192.168.39.36 for domain addons-330674
	I0926 22:29:24.802021   10530 main.go:141] libmachine: (addons-330674) waiting for SSH...
	I0926 22:29:24.805483   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.805987   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.806013   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.806269   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH client type: external
	I0926 22:29:24.806295   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa (-rw-------)
	I0926 22:29:24.806338   10530 main.go:141] libmachine: (addons-330674) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 22:29:24.806355   10530 main.go:141] libmachine: (addons-330674) DBG | About to run SSH command:
	I0926 22:29:24.806382   10530 main.go:141] libmachine: (addons-330674) DBG | exit 0
	I0926 22:29:24.945871   10530 main.go:141] libmachine: (addons-330674) DBG | SSH cmd err, output: <nil>: 
	I0926 22:29:24.946187   10530 main.go:141] libmachine: (addons-330674) domain creation complete
	I0926 22:29:24.946531   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:24.947223   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947466   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947633   10530 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 22:29:24.947649   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:24.949328   10530 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 22:29:24.949342   10530 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 22:29:24.949347   10530 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 22:29:24.949352   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:24.952234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952698   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.952711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952971   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:24.953145   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953333   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953464   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:24.953611   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:24.953903   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:24.953918   10530 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 22:29:25.060937   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.060966   10530 main.go:141] libmachine: Detecting the provisioner...
	I0926 22:29:25.060976   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.064297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064652   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.064684   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064929   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.065163   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065357   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065558   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.065802   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.066092   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.066109   10530 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 22:29:25.175605   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 22:29:25.175676   10530 main.go:141] libmachine: found compatible host: buildroot
	I0926 22:29:25.175689   10530 main.go:141] libmachine: Provisioning with buildroot...
	I0926 22:29:25.175700   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.175985   10530 buildroot.go:166] provisioning hostname "addons-330674"
	I0926 22:29:25.176011   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.176150   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.179382   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.179854   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.179885   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.180043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.180247   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180432   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180575   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.180767   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.181010   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.181024   10530 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-330674 && echo "addons-330674" | sudo tee /etc/hostname
	I0926 22:29:25.307949   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-330674
	
	I0926 22:29:25.307974   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.311584   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312035   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.312067   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312266   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.312427   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312555   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312671   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.312801   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.313027   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.313044   10530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-330674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-330674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-330674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.450755   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.450809   10530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 22:29:25.450872   10530 buildroot.go:174] setting up certificates
	I0926 22:29:25.450885   10530 provision.go:84] configureAuth start
	I0926 22:29:25.450905   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.451192   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:25.454688   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455254   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.455279   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455519   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.458753   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459271   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.459303   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459556   10530 provision.go:143] copyHostCerts
	I0926 22:29:25.459631   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.459785   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.459921   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.459995   10530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.addons-330674 san=[127.0.0.1 192.168.39.36 addons-330674 localhost minikube]
	I0926 22:29:25.636851   10530 provision.go:177] copyRemoteCerts
	I0926 22:29:25.636910   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:25.636931   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.640198   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640611   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.640647   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640899   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.641105   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.641276   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.641432   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:25.727740   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:25.759430   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:25.790642   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 22:29:25.824890   10530 provision.go:87] duration metric: took 373.989122ms to configureAuth
	I0926 22:29:25.824935   10530 buildroot.go:189] setting minikube options for container-runtime
	I0926 22:29:25.825088   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:25.825156   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.828108   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828481   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.828519   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828682   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.828889   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829082   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829206   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.829377   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.829561   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.829574   10530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 22:29:26.083637   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 22:29:26.083688   10530 main.go:141] libmachine: Checking connection to Docker...
	I0926 22:29:26.083699   10530 main.go:141] libmachine: (addons-330674) Calling .GetURL
	I0926 22:29:26.084980   10530 main.go:141] libmachine: (addons-330674) DBG | using libvirt version 8000000
	I0926 22:29:26.087617   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088034   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.088058   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088261   10530 main.go:141] libmachine: Docker is up and running!
	I0926 22:29:26.088277   10530 main.go:141] libmachine: Reticulating splines...
	I0926 22:29:26.088285   10530 client.go:171] duration metric: took 18.862290788s to LocalClient.Create
	I0926 22:29:26.088309   10530 start.go:167] duration metric: took 18.862351466s to libmachine.API.Create "addons-330674"
	I0926 22:29:26.088318   10530 start.go:293] postStartSetup for "addons-330674" (driver="kvm2")
	I0926 22:29:26.088328   10530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:26.088344   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.088646   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:26.088676   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.091157   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091558   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.091604   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091759   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.091987   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.092140   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.092320   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.179094   10530 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:26.184339   10530 info.go:137] Remote host: Buildroot 2025.02
	I0926 22:29:26.184372   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 22:29:26.184463   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 22:29:26.184504   10530 start.go:296] duration metric: took 96.180038ms for postStartSetup
	I0926 22:29:26.184545   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:26.185197   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.187971   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188443   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.188476   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188748   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:26.188966   10530 start.go:128] duration metric: took 18.979703505s to createHost
	I0926 22:29:26.188989   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.191408   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.191793   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.191847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.192051   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.192216   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192328   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192574   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.192739   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.192982   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:26.192997   10530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 22:29:26.302967   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758925766.258154674
	
	I0926 22:29:26.302991   10530 fix.go:216] guest clock: 1758925766.258154674
	I0926 22:29:26.302998   10530 fix.go:229] Guest: 2025-09-26 22:29:26.258154674 +0000 UTC Remote: 2025-09-26 22:29:26.188978954 +0000 UTC m=+19.093162175 (delta=69.17572ms)
	I0926 22:29:26.303017   10530 fix.go:200] guest clock delta is within tolerance: 69.17572ms
	I0926 22:29:26.303021   10530 start.go:83] releasing machines lock for "addons-330674", held for 19.093844163s
	I0926 22:29:26.303039   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.303314   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.306248   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306677   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.306699   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306871   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307420   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307668   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307796   10530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:26.307854   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.307908   10530 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:26.307928   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.311189   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311728   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311762   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311798   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311816   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.312009   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312028   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312218   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312225   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312441   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312444   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312617   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.312624   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.424051   10530 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:26.430969   10530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 22:29:26.610848   10530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 22:29:26.618574   10530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 22:29:26.618644   10530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:26.640335   10530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 22:29:26.640361   10530 start.go:495] detecting cgroup driver to use...
	I0926 22:29:26.640424   10530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:26.662226   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:26.680146   10530 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:29:26.680210   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:29:26.699354   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:29:26.717303   10530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:29:26.869422   10530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:29:27.077850   10530 docker.go:234] disabling docker service ...
	I0926 22:29:27.077946   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:29:27.096325   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:29:27.112839   10530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:29:27.280087   10530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:29:27.428409   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:27.454379   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:27.481918   10530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 22:29:27.481978   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.496018   10530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 22:29:27.496545   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.511695   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.526954   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.542152   10530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:27.556957   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.570979   10530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.593384   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.606999   10530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:27.619008   10530 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 22:29:27.619079   10530 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 22:29:27.643401   10530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:27.659682   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:27.806017   10530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 22:29:27.921593   10530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 22:29:27.921704   10530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 22:29:27.927956   10530 start.go:563] Will wait 60s for crictl version
	I0926 22:29:27.928056   10530 ssh_runner.go:195] Run: which crictl
	I0926 22:29:27.932464   10530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:27.976200   10530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 22:29:27.976335   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.008853   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.043862   10530 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 22:29:28.045740   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:28.048806   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049367   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:28.049401   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049696   10530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:28.054603   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:28.071477   10530 kubeadm.go:883] updating cluster {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:28.071590   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:28.071633   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:28.118674   10530 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 22:29:28.118764   10530 ssh_runner.go:195] Run: which lz4
	I0926 22:29:28.123934   10530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 22:29:28.129383   10530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 22:29:28.129421   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0926 22:29:29.768442   10530 crio.go:462] duration metric: took 1.644542886s to copy over tarball
	I0926 22:29:29.768520   10530 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 22:29:31.498224   10530 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.729674115s)
	I0926 22:29:31.498261   10530 crio.go:469] duration metric: took 1.729788969s to extract the tarball
	I0926 22:29:31.498271   10530 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 22:29:31.542261   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:31.589755   10530 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:29:31.589778   10530 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:31.589786   10530 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.34.0 crio true true} ...
	I0926 22:29:31.589917   10530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-330674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:29:31.590004   10530 ssh_runner.go:195] Run: crio config
	I0926 22:29:31.637842   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:31.637869   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:31.637886   10530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:31.637913   10530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-330674 NodeName:addons-330674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:31.638060   10530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-330674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:29:31.638136   10530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:31.651088   10530 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:31.651173   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:31.664460   10530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:31.688820   10530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:31.711364   10530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
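The 2213-byte file written above is the kubeadm configuration rendered a few lines earlier, staged as kubeadm.yaml.new before being copied to /var/tmp/minikube/kubeadm.yaml. Purely as a sketch (using the binary and config paths shown in this log, not a step the test performs), the generated config could be exercised without touching node state via a dry run:

	# sketch only: validate the staged config against the v1.34.0 kubeadm binary without changing the node
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --dry-run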
	I0926 22:29:31.734280   10530 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:31.738852   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:31.755229   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:31.902308   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
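With kubelet.service and the 10-kubeadm.conf drop-in copied above, the daemon is reloaded and the kubelet started here. A small sketch (standard systemd commands, not part of the test run) of confirming the drop-in is in effect on the node:

	# sketch: show the merged unit, including the 10-kubeadm.conf drop-in, and its current state
	sudo systemctl cat kubelet
	systemctl is-active kubelet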
	I0926 22:29:31.937034   10530 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674 for IP: 192.168.39.36
	I0926 22:29:31.937058   10530 certs.go:195] generating shared ca certs ...
	I0926 22:29:31.937074   10530 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.937207   10530 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 22:29:32.026590   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt ...
	I0926 22:29:32.026617   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt: {Name:mk1e3bf23e32e449f89f22a09284a0006a99cefd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026782   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key ...
	I0926 22:29:32.026793   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key: {Name:mk5eaff0d17e330d6fd7ef6fcf7ad742525bef9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026899   10530 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 22:29:32.787420   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt ...
	I0926 22:29:32.787450   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt: {Name:mk6c2cf5ab5d6decc42b76574fbbb2fa2a0d74f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787609   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key ...
	I0926 22:29:32.787622   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key: {Name:mkbbce150377f831f3bce3eb30a4bb3f0e3a8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787695   10530 certs.go:257] generating profile certs ...
	I0926 22:29:32.787750   10530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key
	I0926 22:29:32.787764   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt with IP's: []
	I0926 22:29:32.908998   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt ...
	I0926 22:29:32.909041   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: {Name:mk6078e9e1b406565a2c72ced7e3ab3a671f1de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909244   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key ...
	I0926 22:29:32.909261   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key: {Name:mkf3b0b0d969697c37ccf2b79cfe2d489e612622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909377   10530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab
	I0926 22:29:32.909405   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I0926 22:29:33.576258   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab ...
	I0926 22:29:33.576288   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab: {Name:mk70a5fec9ce790e76bea656ec7f721eddde8def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576479   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab ...
	I0926 22:29:33.576497   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab: {Name:mkfc811bca2f58c6255301ef1bf7f7fc92f29309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576622   10530 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt
	I0926 22:29:33.576725   10530 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key
	I0926 22:29:33.576779   10530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key
	I0926 22:29:33.576798   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt with IP's: []
	I0926 22:29:33.714042   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt ...
	I0926 22:29:33.714078   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt: {Name:mk2e196363dd00f5cf367b53bb1262ff8b58660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714261   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key ...
	I0926 22:29:33.714278   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key: {Name:mk6fa7164da45c401e6803ce35af819baa1796ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714526   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 22:29:33.714563   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:33.714590   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:33.714617   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:33.715164   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:33.757024   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 22:29:33.801115   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:33.836953   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:33.869906   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:33.902538   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:33.933981   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:33.969510   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 22:29:34.000543   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:34.033373   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:34.056131   10530 ssh_runner.go:195] Run: openssl version
	I0926 22:29:34.062810   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:34.076566   10530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082039   10530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082103   10530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.090282   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
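The b5213941.0 link name created above is OpenSSL's subject-name hash of the minikube CA, produced by the openssl x509 -hash call two lines earlier. A minimal sketch (reusing the same paths as the commands in this log) of reproducing that link name by hand:

	# sketch: derive the hashed symlink name that OpenSSL-based TLS clients look up in /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 for this CA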
	I0926 22:29:34.104577   10530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:34.110236   10530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:34.110292   10530 kubeadm.go:400] StartCluster: {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:34.110386   10530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 22:29:34.110460   10530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:29:34.153972   10530 cri.go:89] found id: ""
	I0926 22:29:34.154038   10530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:34.166665   10530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:34.179555   10530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:34.192252   10530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:34.192272   10530 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:34.192315   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:34.204361   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:34.204419   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:34.216783   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:34.228359   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:34.228420   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:34.241418   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.253479   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:34.253551   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.266101   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:34.278300   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:34.278381   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:34.291142   10530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 22:29:34.464024   10530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:29:47.445637   10530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:47.445747   10530 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:47.445868   10530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:47.445976   10530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:47.446109   10530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:47.446209   10530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:47.447948   10530 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:47.448061   10530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:47.448147   10530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:47.448269   10530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:47.448325   10530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:47.448386   10530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:47.448429   10530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:47.448504   10530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:47.448610   10530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448701   10530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:47.448884   10530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448982   10530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:47.449075   10530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:47.449133   10530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:47.449183   10530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:47.449259   10530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:47.449346   10530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:47.449422   10530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:47.449517   10530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:47.449600   10530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:47.449705   10530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:47.449800   10530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:47.451527   10530 out.go:252]   - Booting up control plane ...
	I0926 22:29:47.451640   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:47.451715   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:47.451812   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:47.451951   10530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:47.452083   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:47.452213   10530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:47.452327   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:47.452402   10530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:47.452577   10530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:47.452679   10530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:47.452730   10530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001754359s
	I0926 22:29:47.452819   10530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:47.452954   10530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.36:8443/livez
	I0926 22:29:47.453080   10530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:47.453186   10530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:47.453298   10530 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.979294458s
	I0926 22:29:47.453372   10530 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.933266488s
	I0926 22:29:47.453434   10530 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002163771s
	I0926 22:29:47.453584   10530 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:47.453730   10530 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:47.453820   10530 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:47.454057   10530 kubeadm.go:318] [mark-control-plane] Marking the node addons-330674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:47.454109   10530 kubeadm.go:318] [bootstrap-token] Using token: fhdqe8.jaemq9w7cxwr09ny
	I0926 22:29:47.456600   10530 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:47.456703   10530 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:47.456774   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:47.456924   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:47.457204   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:47.457400   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:47.457529   10530 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:47.457694   10530 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:47.457760   10530 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:47.457852   10530 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:47.457878   10530 kubeadm.go:318] 
	I0926 22:29:47.457966   10530 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:47.457989   10530 kubeadm.go:318] 
	I0926 22:29:47.458096   10530 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:47.458110   10530 kubeadm.go:318] 
	I0926 22:29:47.458158   10530 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:47.458244   10530 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:47.458315   10530 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:47.458324   10530 kubeadm.go:318] 
	I0926 22:29:47.458397   10530 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:47.458406   10530 kubeadm.go:318] 
	I0926 22:29:47.458474   10530 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:47.458483   10530 kubeadm.go:318] 
	I0926 22:29:47.458552   10530 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:47.458681   10530 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:47.458813   10530 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:47.458841   10530 kubeadm.go:318] 
	I0926 22:29:47.458968   10530 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:47.459081   10530 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:47.459092   10530 kubeadm.go:318] 
	I0926 22:29:47.459200   10530 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459342   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 22:29:47.459397   10530 kubeadm.go:318] 	--control-plane 
	I0926 22:29:47.459414   10530 kubeadm.go:318] 
	I0926 22:29:47.459557   10530 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:47.459575   10530 kubeadm.go:318] 
	I0926 22:29:47.459704   10530 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459860   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
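The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the node (assuming the certificateDir /var/lib/minikube/certs used earlier in this log):

	# sketch: recompute the discovery hash from the cluster CA certificate
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'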
	I0926 22:29:47.459875   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:47.459885   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:47.462286   10530 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:47.463479   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:47.480090   10530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
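The 496-byte file written above is the bridge CNI configuration for this run. As an illustration of the format only (not the exact bytes minikube writes), a bridge conflist covering the 10.244.0.0/16 pod CIDR used here looks roughly like this:

	# sketch: a bridge conflist of this general shape; the real /etc/cni/net.d/1-k8s.conflist may differ
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{ "cniVersion": "0.4.0", "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] } },
	    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	EOF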
	I0926 22:29:47.505223   10530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-330674 minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-330674 minikube.k8s.io/primary=true
	I0926 22:29:47.547348   10530 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:47.696459   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.197390   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.697112   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.197409   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.697305   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.196725   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.697377   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.197169   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.696547   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.197238   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.337983   10530 kubeadm.go:1113] duration metric: took 4.832674675s to wait for elevateKubeSystemPrivileges
	I0926 22:29:52.338028   10530 kubeadm.go:402] duration metric: took 18.227740002s to StartCluster
	I0926 22:29:52.338055   10530 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.338211   10530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:52.338922   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.339193   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:52.339222   10530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:52.339287   10530 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
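Given the enable map above, the resulting per-addon state for this profile can be inspected after startup; a sketch (the minikube binary path is assumed, not taken from this log excerpt):

	# sketch: list addon status for the test profile
	minikube addons list -p addons-330674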
	I0926 22:29:52.339397   10530 addons.go:69] Setting yakd=true in profile "addons-330674"
	I0926 22:29:52.339422   10530 addons.go:238] Setting addon yakd=true in "addons-330674"
	I0926 22:29:52.339438   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.339450   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339442   10530 addons.go:69] Setting inspektor-gadget=true in profile "addons-330674"
	I0926 22:29:52.339484   10530 addons.go:238] Setting addon inspektor-gadget=true in "addons-330674"
	I0926 22:29:52.339489   10530 addons.go:69] Setting registry-creds=true in profile "addons-330674"
	I0926 22:29:52.339500   10530 addons.go:238] Setting addon registry-creds=true in "addons-330674"
	I0926 22:29:52.339517   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339530   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339560   10530 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-330674"
	I0926 22:29:52.339588   10530 addons.go:69] Setting default-storageclass=true in profile "addons-330674"
	I0926 22:29:52.339641   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-330674"
	I0926 22:29:52.339699   10530 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.339712   10530 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-330674"
	I0926 22:29:52.339718   10530 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:29:52.339748   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339759   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339933   10530 addons.go:69] Setting registry=true in profile "addons-330674"
	I0926 22:29:52.339940   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.339946   10530 addons.go:238] Setting addon registry=true in "addons-330674"
	I0926 22:29:52.339964   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339980   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340110   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340158   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340197   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340205   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340206   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340231   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340240   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340291   10530 addons.go:69] Setting metrics-server=true in profile "addons-330674"
	I0926 22:29:52.340304   10530 addons.go:238] Setting addon metrics-server=true in "addons-330674"
	I0926 22:29:52.340326   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340349   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340374   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340392   10530 addons.go:69] Setting cloud-spanner=true in profile "addons-330674"
	I0926 22:29:52.340443   10530 addons.go:238] Setting addon cloud-spanner=true in "addons-330674"
	I0926 22:29:52.340560   10530 addons.go:69] Setting volcano=true in profile "addons-330674"
	I0926 22:29:52.340574   10530 addons.go:238] Setting addon volcano=true in "addons-330674"
	I0926 22:29:52.340604   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340716   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340742   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340788   10530 addons.go:69] Setting volumesnapshots=true in profile "addons-330674"
	I0926 22:29:52.340800   10530 addons.go:238] Setting addon volumesnapshots=true in "addons-330674"
	I0926 22:29:52.340924   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340944   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340986   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.341014   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.341238   10530 addons.go:69] Setting ingress=true in profile "addons-330674"
	I0926 22:29:52.341253   10530 addons.go:69] Setting storage-provisioner=true in profile "addons-330674"
	I0926 22:29:52.341266   10530 addons.go:238] Setting addon storage-provisioner=true in "addons-330674"
	I0926 22:29:52.341300   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341348   10530 addons.go:238] Setting addon ingress=true in "addons-330674"
	I0926 22:29:52.341240   10530 addons.go:69] Setting ingress-dns=true in profile "addons-330674"
	I0926 22:29:52.341383   10530 addons.go:238] Setting addon ingress-dns=true in "addons-330674"
	I0926 22:29:52.341398   10530 addons.go:69] Setting gcp-auth=true in profile "addons-330674"
	I0926 22:29:52.341402   10530 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-330674"
	I0926 22:29:52.341416   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-330674"
	I0926 22:29:52.341435   10530 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.341449   10530 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-330674"
	I0926 22:29:52.341509   10530 mustload.go:65] Loading cluster: addons-330674
	I0926 22:29:52.341572   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341666   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342053   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.342088   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342112   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342422   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342457   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342651   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342763   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342850   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342877   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343172   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.343225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343690   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.343759   10530 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:52.345141   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:52.350321   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350372   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.350322   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350435   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.351572   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.351633   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.358613   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.358684   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.361429   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0926 22:29:52.362310   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.363266   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.363291   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.363782   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.364414   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.364455   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.371191   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0926 22:29:52.371861   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.372561   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.372652   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.375030   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.375692   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.375748   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.375980   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0926 22:29:52.377892   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0926 22:29:52.378610   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.379228   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.379277   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.380418   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.380730   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0926 22:29:52.381210   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.381428   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.382129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.382712   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.382732   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.383155   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.383734   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.383880   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.385421   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.386039   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.386056   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.386744   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.392554   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.392631   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.392957   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0926 22:29:52.403136   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0926 22:29:52.403357   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.404253   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.404397   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.404815   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.405017   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.406177   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I0926 22:29:52.407153   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.407267   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0926 22:29:52.407493   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.408091   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408111   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.408550   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.408710   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408724   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.409166   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.409846   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.409891   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410616   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.410655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410905   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0926 22:29:52.411003   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0926 22:29:52.411804   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.411878   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413166   10530 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-330674"
	I0926 22:29:52.413212   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.413277   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413290   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413308   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413357   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413386   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I0926 22:29:52.413618   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.413655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.413774   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413937   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.413999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413926   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.414268   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0926 22:29:52.414417   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415084   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.415098   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.415167   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415401   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0926 22:29:52.416240   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.416691   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.416706   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.416995   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.417284   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.417356   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.417400   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.417684   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417702   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417745   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417759   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417859   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418249   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418537   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418587   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418801   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418846   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418861   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419363   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419645   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.423740   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424043   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0926 22:29:52.424090   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424576   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0926 22:29:52.426334   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426454   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426607   10530 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:52.427471   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.427488   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.427592   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0926 22:29:52.427968   10530 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:52.427972   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:52.428007   10530 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:52.428043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.428128   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.428207   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.428668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.428709   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.429176   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:52.429193   10530 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:52.429212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.429849   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430116   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430384   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430434   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.430456   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430515   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.430528   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.430743   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.430835   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0926 22:29:52.431092   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.432143   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.432185   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.432703   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0926 22:29:52.433465   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.433715   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.434668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434685   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.434904   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434924   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.435446   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.435463   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.435495   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0926 22:29:52.435973   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.435917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437085   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.437270   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.437297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437337   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.437502   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.437868   10530 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:52.438175   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.438188   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.438549   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0926 22:29:52.438682   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.440037   10530 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:52.440061   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:52.440079   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.442435   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.442474   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.442439   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.442677   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.443200   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.443252   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.444657   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.444806   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.446914   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.447017   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.447041   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.447190   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.447362   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.447543   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.447999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.451596   10530 addons.go:238] Setting addon default-storageclass=true in "addons-330674"
	I0926 22:29:52.451644   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.452021   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.452143   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.452216   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0926 22:29:52.452540   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.452557   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.454847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454885   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.454917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454957   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.455075   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.455146   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.458000   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.458096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.458285   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.458304   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.458720   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.458993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.459085   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.459239   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.463711   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.464398   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.465791   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0926 22:29:52.466371   10530 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:52.466644   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:52.467050   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.467569   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0926 22:29:52.467743   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0926 22:29:52.468044   10530 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:52.468068   10530 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:52.468090   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.468774   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.468790   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.469218   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.470226   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.470297   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:52.470392   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0926 22:29:52.470551   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.471071   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.471372   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.472449   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.472652   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.472891   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.473131   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:52.473349   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.473363   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.473882   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.473998   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0926 22:29:52.474193   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.474752   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.475909   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:52.477113   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.477095   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.477161   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.477185   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.478618   10530 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:52.480579   10530 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:52.480597   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:52.480661   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.480818   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.480951   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0926 22:29:52.481512   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0926 22:29:52.481756   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.481991   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.482171   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.482530   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0926 22:29:52.482811   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483144   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0926 22:29:52.483423   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.483436   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.483520   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483963   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484104   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484127   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.484599   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484633   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484662   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486038   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486072   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0926 22:29:52.486043   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486184   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.486663   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.486711   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486942   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487245   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487310   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487460   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487535   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487597   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487535   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487638   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487646   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487661   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487848   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.487893   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.488103   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488104   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488177   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488361   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488389   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:52.488389   10530 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:52.489656   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.490162   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.490179   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.490193   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0926 22:29:52.490687   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:52.490706   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:52.491492   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.491504   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.491505   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:52.491619   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.491782   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.492610   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.493077   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.493208   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.493847   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.494199   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.494517   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.494954   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:52.495130   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:52.495634   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496189   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496270   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.496863   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.496884   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.497219   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.497702   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:52.497708   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.497731   10530 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:52.497749   10530 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:52.498358   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.498430   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.498469   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.497975   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.498604   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.499305   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.498668   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0926 22:29:52.499351   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.499605   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.499952   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:52.499987   10530 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:52.500004   10530 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:52.500007   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500012   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:52.500023   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500044   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:52.500073   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.499594   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.500501   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.500514   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.500523   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.500129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.500763   10530 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:52.501281   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.501333   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.501341   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.501398   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.501414   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.501434   10530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:52.501736   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.501463   10530 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	W0926 22:29:52.501543   10530 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0926 22:29:52.502042   10530 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:52.502057   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:52.502060   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.502073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502156   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:52.502598   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:52.502619   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502406   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.502678   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.502927   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:52.502977   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:52.503014   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.503203   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.503387   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.503874   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.504066   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.504321   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.504477   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.504534   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.505083   10530 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:52.505131   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:52.505159   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.505214   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:52.505228   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:52.505243   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.510312   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510352   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.510372   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510540   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0926 22:29:52.511814   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.512165   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.512498   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512694   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512861   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512883   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.512940   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513138   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513164   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513463   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.513728   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.513760   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.513773   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513797   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513819   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513862   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514034   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514293   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514309   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.514314   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514335   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514397   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.514417   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.514466   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.514499   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514543   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514683   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514718   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514740   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514803   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514842   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515120   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.515158   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515314   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515313   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.515369   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515519   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515528   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515669   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.515801   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515861   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.516001   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.516811   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.516866   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.517166   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.517319   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.517454   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.517568   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.518367   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.520659   10530 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:52.522403   10530 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:52.523881   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:52.523952   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:52.523993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.527749   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528245   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.528291   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528445   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.528629   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.528784   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.528962   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.529317   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0926 22:29:52.529881   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.530387   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.530406   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.530792   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.530983   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.532944   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.533139   10530 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:52.533157   10530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:52.533177   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.536538   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537055   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.537081   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537257   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.537421   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.537573   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.537707   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	W0926 22:29:52.825147   10530 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
	I0926 22:29:52.825184   10530 retry.go:31] will retry after 357.513028ms: ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
	I0926 22:29:53.505584   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:53.562710   10530 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:53.562734   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:53.582172   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:53.582193   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:53.609139   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:53.610191   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:53.632195   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:53.632223   10530 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:53.685673   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:53.685699   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:53.786927   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:53.923343   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.584115291s)
	I0926 22:29:53.923377   10530 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.578070431s)
	I0926 22:29:53.923465   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:53.923530   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:53.925682   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:53.978909   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:53.978947   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:54.056339   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:54.057576   10530 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:54.057598   10530 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:54.073051   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:54.083110   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:54.083142   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:54.130144   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:54.233603   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:54.233626   10530 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:54.338703   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:54.338734   10530 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:54.348111   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.401871   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:54.427136   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:54.427192   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:54.477977   10530 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.478003   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:54.488580   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:54.488611   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:54.603741   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:54.603771   10530 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:54.621914   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:54.621950   10530 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:54.727962   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:54.727996   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:54.784061   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.798515   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:54.798543   10530 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:54.906797   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:54.906820   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:54.981155   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:55.311775   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:55.311837   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:55.397530   10530 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:55.397562   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:55.499650   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:55.662109   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:55.662147   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:55.768710   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.263082445s)
	I0926 22:29:55.768768   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.768794   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769138   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.769186   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769194   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769212   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.769221   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769505   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769523   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769540   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.841693   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:56.282914   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:56.282938   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:56.489532   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:56.489560   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:56.788009   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:56.788039   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:57.398847   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:57.398877   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:57.915310   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:57.915334   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:58.078605   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:58.892510   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.283337619s)
	I0926 22:29:58.892546   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.282325805s)
	I0926 22:29:58.892563   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892578   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892593   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892605   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892599   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.105645313s)
	I0926 22:29:58.892637   10530 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.969155254s)
	I0926 22:29:58.892657   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892668   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892696   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.969142392s)
	I0926 22:29:58.892721   10530 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0926 22:29:58.892729   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.967021186s)
	I0926 22:29:58.892748   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892757   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892954   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.892998   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893008   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893017   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893125   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893140   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893142   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893148   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893155   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893215   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893234   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893240   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893247   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893253   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893298   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893304   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893311   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893317   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893610   10530 node_ready.go:35] waiting up to 6m0s for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.895454   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895477   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895485   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895492   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895493   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895517   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895539   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895560   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895521   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895572   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895596   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895613   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.929667   10530 node_ready.go:49] node "addons-330674" is "Ready"
	I0926 22:29:58.929709   10530 node_ready.go:38] duration metric: took 36.077495ms for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.929729   10530 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:58.929805   10530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.257543853s)
	I0926 22:29:59.313962   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.313971   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.240823229s)
	I0926 22:29:59.314034   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314042   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314274   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314286   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314294   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314315   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314350   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314358   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314365   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314372   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314617   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314651   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314659   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314968   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.315002   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.315029   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.577160   10530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-330674" context rescaled to 1 replicas
	I0926 22:29:59.623147   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.623171   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.623479   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.623536   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.623556   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827523   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.697342297s)
	I0926 22:29:59.827568   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827580   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.827837   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.827864   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.827879   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827899   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827910   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.828169   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.828186   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.961517   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:59.961559   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:59.965295   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.965808   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:59.965856   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.966131   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:59.966338   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:59.966510   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:59.966670   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:30:00.039995   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:00.040024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:00.040328   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:00.040346   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:00.125106   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.776957771s)
	W0926 22:30:00.125185   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:00.125204   10530 retry.go:31] will retry after 258.780744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
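The two identical failures above both come from kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: at least one YAML document in that file carries no apiVersion or kind, which is how an empty or truncated manifest looks to the validator, and a --force retry cannot change that. A quick way to confirm, assuming the addons-330674 VM from this run were still up (this diagnostic is an illustration, not something the test suite executes):

	# Hypothetical check: print the first lines of the manifest inside the VM
	# to see whether apiVersion/kind are present at all.
	minikube -p addons-330674 ssh -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml

The same validation error keeps recurring through the later retries in this log, which is consistent with the file itself being the problem rather than a transient API-server condition.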
	I0926 22:30:00.324361   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:30:00.385019   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.672566   10530 addons.go:238] Setting addon gcp-auth=true in "addons-330674"
	I0926 22:30:00.672636   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:30:00.673096   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.673137   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.687087   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0926 22:30:00.687645   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.688187   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.688212   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.688516   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.689029   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.689057   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.702335   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0926 22:30:00.702789   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.703222   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.703244   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.703562   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.703802   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:30:00.705815   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:30:00.706084   10530 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:30:00.706107   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:30:00.709280   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709679   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:30:00.709711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709896   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:30:00.710096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:30:00.710284   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:30:00.710443   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:30:02.404757   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.620648683s)
	I0926 22:30:02.404821   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404859   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404866   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.423678426s)
	I0926 22:30:02.404891   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404914   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404943   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.905250926s)
	I0926 22:30:02.404990   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405022   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.563299799s)
	W0926 22:30:02.405057   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:30:02.405082   10530 retry.go:31] will retry after 343.769978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
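The failure above is an ordering issue rather than a bad manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD that registers that kind is created by the same kubectl invocation and has not yet been established by the API server, hence "ensure CRDs are installed first". The retried --force apply later in this log (completing at 22:30:05) appears to go through cleanly once the CRDs exist. A manual equivalent, sketched here with the same file paths the addon uses and not taken from minikube's own code, would stage the apply:

	# Sketch only: register the CRD, wait for it to become established,
	# then apply the VolumeSnapshotClass that depends on it.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml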
	I0926 22:30:02.405209   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405221   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405230   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405237   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405258   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405266   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405272   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405336   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405337   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405348   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405356   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405363   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405530   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405546   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405653   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405699   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405707   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405715   10530 addons.go:479] Verifying addon metrics-server=true in "addons-330674"
	I0926 22:30:02.405821   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405866   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405877   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405883   10530 addons.go:479] Verifying addon registry=true in "addons-330674"
	I0926 22:30:02.407363   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.005456123s)
	I0926 22:30:02.407398   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407407   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407601   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407618   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407627   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407635   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407841   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407857   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407865   10530 addons.go:479] Verifying addon ingress=true in "addons-330674"
	I0926 22:30:02.408308   10530 out.go:179] * Verifying registry addon...
	I0926 22:30:02.408383   10530 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-330674 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:30:02.409149   10530 out.go:179] * Verifying ingress addon...
	I0926 22:30:02.410466   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:30:02.411443   10530 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:30:02.443532   10530 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:30:02.443553   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.450316   10530 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:30:02.450340   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.749736   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:30:02.955435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.005505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.290182   10530 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.360350718s)
	I0926 22:30:03.290221   10530 api_server.go:72] duration metric: took 10.950960949s to wait for apiserver process to appear ...
	I0926 22:30:03.290227   10530 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:30:03.290245   10530 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0926 22:30:03.291781   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.213122213s)
	I0926 22:30:03.291866   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.291892   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292156   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292173   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292181   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.292189   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292447   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292465   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292477   10530 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:30:03.294391   10530 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:30:03.297053   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:30:03.321731   10530 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
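The healthz probe above talks to the API server directly; on a default configuration the /healthz, /livez and /readyz endpoints are readable without client credentials, so the same check could be reproduced from the host while the cluster is up (illustrative command, not part of the test run):

	# Certificate verification is skipped because the API server uses the
	# cluster-local CA; the endpoint simply returns "ok" when healthy.
	curl -k https://192.168.39.36:8443/healthz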
	I0926 22:30:03.330874   10530 api_server.go:141] control plane version: v1.34.0
	I0926 22:30:03.330909   10530 api_server.go:131] duration metric: took 40.674253ms to wait for apiserver health ...
	I0926 22:30:03.330920   10530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:30:03.344023   10530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:30:03.344056   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.403718   10530 system_pods.go:59] 20 kube-system pods found
	I0926 22:30:03.403767   10530 system_pods.go:61] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.403775   10530 system_pods.go:61] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.403782   10530 system_pods.go:61] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.403788   10530 system_pods.go:61] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.403793   10530 system_pods.go:61] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.403801   10530 system_pods.go:61] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.403805   10530 system_pods.go:61] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.403809   10530 system_pods.go:61] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.403814   10530 system_pods.go:61] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.403839   10530 system_pods.go:61] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.403855   10530 system_pods.go:61] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.403861   10530 system_pods.go:61] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.403868   10530 system_pods.go:61] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.403877   10530 system_pods.go:61] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.403885   10530 system_pods.go:61] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.403892   10530 system_pods.go:61] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.403899   10530 system_pods.go:61] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.403905   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403911   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403923   10530 system_pods.go:61] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.403929   10530 system_pods.go:74] duration metric: took 73.003109ms to wait for pod list to return data ...
	I0926 22:30:03.403938   10530 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:30:03.416293   10530 default_sa.go:45] found service account: "default"
	I0926 22:30:03.416322   10530 default_sa.go:55] duration metric: took 12.37763ms for default service account to be created ...
	I0926 22:30:03.416335   10530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:30:03.420408   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.420640   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.435848   10530 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:03.435885   10530 system_pods.go:89] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.435896   10530 system_pods.go:89] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.435903   10530 system_pods.go:89] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.435909   10530 system_pods.go:89] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.435920   10530 system_pods.go:89] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.435926   10530 system_pods.go:89] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.435933   10530 system_pods.go:89] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.435938   10530 system_pods.go:89] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.435943   10530 system_pods.go:89] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.435948   10530 system_pods.go:89] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.435961   10530 system_pods.go:89] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.435968   10530 system_pods.go:89] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.435973   10530 system_pods.go:89] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.435983   10530 system_pods.go:89] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.435990   10530 system_pods.go:89] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.435995   10530 system_pods.go:89] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.436004   10530 system_pods.go:89] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.436011   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436030   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436040   10530 system_pods.go:89] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.436051   10530 system_pods.go:126] duration metric: took 19.710312ms to wait for k8s-apps to be running ...
	I0926 22:30:03.436063   10530 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:30:03.436116   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:30:03.805385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.933120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.935740   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.103360   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.718280199s)
	W0926 22:30:04.103409   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 retry.go:31] will retry after 415.010612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.397332098s)
	I0926 22:30:04.105638   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:04.107144   10530 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:30:04.108740   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:30:04.108757   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:30:04.204504   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:30:04.204558   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:30:04.266226   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.266270   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:30:04.318135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.326300   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.425264   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.425430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.519163   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.804743   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.918462   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.921343   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.305855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.419096   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.420385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.480378   10530 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.044238076s)
	I0926 22:30:05.480434   10530 system_svc.go:56] duration metric: took 2.044366858s WaitForService to wait for kubelet
	I0926 22:30:05.480445   10530 kubeadm.go:586] duration metric: took 13.141186204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:30:05.480467   10530 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:30:05.480379   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.730593729s)
	I0926 22:30:05.480567   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480587   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.480910   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.480930   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.480948   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480958   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.481297   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.481319   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.481322   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:05.490128   10530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 22:30:05.490159   10530 node_conditions.go:123] node cpu capacity is 2
	I0926 22:30:05.490173   10530 node_conditions.go:105] duration metric: took 9.698866ms to run NodePressure ...
	I0926 22:30:05.490188   10530 start.go:241] waiting for startup goroutines ...
	I0926 22:30:05.823251   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.995165   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.995238   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.168992   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.842648363s)
	I0926 22:30:06.169046   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169088   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169430   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169452   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169462   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169470   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169730   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169745   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169769   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:06.170927   10530 addons.go:479] Verifying addon gcp-auth=true in "addons-330674"
	I0926 22:30:06.172988   10530 out.go:179] * Verifying gcp-auth addon...
	I0926 22:30:06.174897   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:30:06.212287   10530 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:30:06.212317   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.312659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.419336   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.421545   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.682289   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.707555   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.188348588s)
	W0926 22:30:06.707615   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:06.707638   10530 retry.go:31] will retry after 690.015659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:06.806300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.928806   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.928935   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.182496   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.398719   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:07.423608   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.424145   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.683323   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.805352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.926676   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.926821   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.183118   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.305133   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.418514   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.420565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.679221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.802855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.849509   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450746787s)
	W0926 22:30:08.849558   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:08.849579   10530 retry.go:31] will retry after 720.875973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:08.914397   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.916076   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.178734   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.301290   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.420684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.421209   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.571363   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:09.684948   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.814626   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.920020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.920521   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.184424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.302867   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.415872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.418972   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.681185   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.802134   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.816960   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245551165s)
	W0926 22:30:10.817021   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:10.817043   10530 retry.go:31] will retry after 1.516018438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
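
The recurring failure above is kubectl's client-side validation rejecting a document in ig-crd.yaml whose top-level apiVersion and kind are empty, so every re-apply of the inspektor-gadget addon exits with status 1 even though the other gadget objects apply cleanly. A minimal Go sketch of that kind of check follows; it uses gopkg.in/yaml.v3 and a local file name purely as illustrative assumptions, and is not kubectl's or minikube's actual validation code.

	// validate_typemeta.go - illustrative sketch only; it mirrors the client-side
	// check behind the "apiVersion not set, kind not set" error in the log above.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// typeMeta carries the two top-level fields the error message names.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the addon manifest
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Walk every YAML document in the stream and flag the ones missing a header.
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var tm typeMeta
			if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}
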
	I0926 22:30:10.916672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.920270   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.178990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.306805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.418242   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.419600   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.680889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.804313   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.914838   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.918376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.180561   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.301512   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.333663   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:12.415805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.419363   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.682335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.804222   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.918788   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.919995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.180331   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.305340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.415577   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.416349   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.683699   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.805707   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.813715   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.480003432s)
	W0926 22:30:13.813753   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.813774   10530 retry.go:31] will retry after 1.257586739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.921625   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.925319   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.180615   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.305510   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.415983   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.416424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.679635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.807576   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.915558   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.917303   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.071517   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:15.181159   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.306945   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.418630   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:15.418800   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.679147   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.893712   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.916744   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.917096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.185591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.304040   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.326267   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.254707359s)
	W0926 22:30:16.326313   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.326336   10530 retry.go:31] will retry after 2.377890696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.416481   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.419518   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.681550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.803052   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.918664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.919009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.182452   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.302075   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.413448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.047202   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.047385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.047552   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.048184   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.179560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.303903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.418028   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.421419   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.680067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.705254   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:18.801213   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.914739   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.917654   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.179344   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.303239   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.418321   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.418678   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.679164   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.806674   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.908858   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.203561998s)
	W0926 22:30:19.908904   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.908926   10530 retry.go:31] will retry after 4.32939773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
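
The "will retry after ..." lines above show the failing apply being re-run with a growing, jittered delay (roughly 1.5s, 1.3s, 2.4s, 4.3s, and later 11.8s and 15.6s). A minimal sketch of that retry-with-backoff pattern, assuming a simple stand-alone helper; this is illustrative only, not the actual retry.go implementation.

	// retry_sketch.go - illustrative only: the general retry-with-backoff
	// pattern suggested by the "will retry after ..." lines in the log above.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping an exponentially growing,
	// jittered delay between failures.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Exponential backoff with roughly +/-50% jitter.
			d := base << uint(i)
			d = d/2 + time.Duration(rand.Int63n(int64(d)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		// Short base so a casual run finishes quickly; the log's cadence is second-scale.
		err := retry(4, 200*time.Millisecond, func() error {
			return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
		})
		fmt.Println("gave up:", err)
	}
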
	I0926 22:30:19.917643   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.919920   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.581572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.582550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.583652   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.584766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.679458   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.802582   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.916995   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.918666   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.180913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.332135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.417484   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.417798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.679247   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.801601   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.921505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.923595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.206659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.303078   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.415068   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.416432   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.682206   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.802352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.916004   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.916426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.178440   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.302488   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.416760   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.417074   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.678471   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.801463   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.914659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.915754   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.183326   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.239507   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:24.305343   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.420822   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.422445   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.681588   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.803334   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.920591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.921194   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.181354   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.300531   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.414416   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:25.415291   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.431734   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.19217319s)
	W0926 22:30:25.431806   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.431843   10530 retry.go:31] will retry after 4.927424107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.679778   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.804725   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.917163   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.917189   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.181015   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.302673   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.415255   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.416011   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.932748   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.938776   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.939199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.939659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.179484   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.300382   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.413855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.416495   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.679241   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.803067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.915766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.916504   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.179926   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.303820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.417009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.680438   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.803693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.913738   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.917580   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.183260   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.305035   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.415252   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.421557   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.681884   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.801694   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.917990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.920375   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.183992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.303403   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.359440   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:30.416736   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.418359   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.679889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.802012   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.916345   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.916485   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:31.151193   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.151227   10530 retry.go:31] will retry after 11.763207551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.179522   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.300872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.417428   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.421535   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.683158   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.804166   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.917250   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.919814   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.180485   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.301448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.414799   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.416565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.682199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.802085   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.918254   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.920864   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.180283   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.302044   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.418195   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.420283   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.682205   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.802900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.915518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.917060   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.183894   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.302424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.418071   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.418937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.681883   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.802739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.913927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.918879   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.348473   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.348627   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.447966   10530 kapi.go:107] duration metric: took 33.037496042s to wait for kubernetes.io/minikube-addons=registry ...
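
The kapi.go:96 lines above, and the kapi.go:107 duration metric just printed, come from polling pods by label selector until they leave Pending; the registry selector took about 33s here. A hedged client-go sketch of that polling pattern follows; the kubeconfig path, namespace, and poll interval are assumptions for illustration, not values taken from this log.

	// wait_for_label.go - illustrative sketch of waiting for a pod that matches a
	// label selector to reach Running, as the kapi.go lines above do.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry"
		for {
			// List pods matching the selector and check whether one is Running yet.
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("pod is Running")
				return
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
	}
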
	I0926 22:30:35.448199   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.683550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.802457   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.919287   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.178520   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.307082   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.415664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.678900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.803136   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.917411   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.185045   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.305913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.630651   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.685375   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.802798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.916719   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.181102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.303094   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.417302   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.678435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.801995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.915065   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.178903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.304329   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.416763   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.680033   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.801768   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.920400   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.180647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.304347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.416722   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.680569   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.803376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.917005   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.180461   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.304146   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.417255   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.886447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.888300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.917365   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.180186   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.301635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.419758   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.684808   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.804001   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.915430   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:42.923040   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.179997   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.306383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.417022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.682482   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.804992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.922647   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.178880   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.240115   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.324639979s)
	W0926 22:30:44.240173   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.240195   10530 retry.go:31] will retry after 8.858097577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.303169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.418771   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.679551   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.801684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.916013   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.179885   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.304426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.428618   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.683426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.810100   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.925137   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.179160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.304364   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.448027   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.680201   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.805269   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.918049   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.181812   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.303700   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.415733   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.678623   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.808820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.924088   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.303763   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.424961   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.683665   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.803327   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.916118   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.178848   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.307797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.416656   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.678851   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.802681   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.915714   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.180965   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.302266   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.415480   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.678616   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.804349   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.915318   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.184191   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.304048   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.418336   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.681435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.804006   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.920620   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.183727   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.302182   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.416612   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.680540   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.804272   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.916855   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.099065   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:53.180672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.420113   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.685179   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.804757   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.917568   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.182857   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.302373   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.363811   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.264695675s)
	W0926 22:30:54.363881   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:54.363905   10530 retry.go:31] will retry after 15.55536091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:54.417539   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.681049   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.805028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.915452   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.179696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.301978   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.415794   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.679572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.819347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.918310   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.198401   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.304413   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.419426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.680091   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.801779   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.918752   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.179612   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.301230   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.417433   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.681559   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.804383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.917958   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.184656   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.306258   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.417260   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.698392   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.807597   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.915960   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.185696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.303096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.416022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.683432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.802671   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.916001   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.181296   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.301887   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.427020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.678513   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.801870   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.920491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.185028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.304169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.418926   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.685221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.802805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.915852   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.180224   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.310447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.417773   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.684271   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.802160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.917181   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.179667   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.305578   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.421443   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.679070   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.801937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.915703   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.183143   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.303032   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.416888   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.681175   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.804024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.931508   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.179817   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.303489   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.417042   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.679451   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.802120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.918159   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.182494   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.401415   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.422627   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.679776   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.809902   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.918997   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.181491   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.302724   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.420205   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.680745   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.802430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.917742   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.301417   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.419665   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.679714   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.804244   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.918524   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.179876   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.302541   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.416678   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.680295   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.803785   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.916555   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.919538   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:31:10.182518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.302156   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.417516   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:10.681589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.803589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.918491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.184181   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.304515   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.419292   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.446493   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.526922683s)
	W0926 22:31:11.446528   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.446544   10530 retry.go:31] will retry after 18.44611829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.678436   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.807747   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.919354   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.183063   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.311693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.420067   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.680144   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.802750   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.915380   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.178429   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.304983   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.473623   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.681102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.802854   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.917953   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.183739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.306018   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.646952   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.685595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.802999   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.921890   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.181084   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:15.302376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:15.419849   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.683746   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.022493   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.022587   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.182478   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.302322   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.418598   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.679927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.808355   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.925473   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:17.186059   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.302020   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:17.427294   10530 kapi.go:107] duration metric: took 1m15.015851492s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:31:17.679432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.802560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.182037   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.300453   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.682444   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.804335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.183050   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.303647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.682844   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.801755   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.180116   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.303024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.683340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.802598   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.185647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:21.303560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.682723   10530 kapi.go:107] duration metric: took 1m15.507819233s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:21.684569   10530 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-330674 cluster.
	I0926 22:31:21.685984   10530 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:21.687420   10530 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
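The three gcp-auth messages above describe how to opt a pod out of credential injection: give the pod a label whose key is gcp-auth-skip-secret. A minimal sketch of trying that by hand follows; the pod name, image, and label value are placeholders, and only the label key comes from the message.

	# Hypothetical pod that the gcp-auth webhook should skip (label key from the log message;
	# pod name, image, and label value are made up for illustration).
	kubectl --context addons-330674 run gcp-auth-skip-demo \
	  --image=docker.io/nginx:alpine \
	  --labels=gcp-auth-skip-secret=true

	# List the pod's volumes to confirm no credentials volume was injected.
	kubectl --context addons-330674 get pod gcp-auth-skip-demo \
	  -o jsonpath='{.spec.volumes[*].name}'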
	I0926 22:31:21.803101   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.301291   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.802797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.304046   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.801813   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.302450   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.802449   10530 kapi.go:107] duration metric: took 1m21.505395208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
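At this point all three kapi.go waits have completed: ingress-nginx, gcp-auth, and csi-hostpath-driver each took roughly 75 to 82 seconds to leave Pending. Roughly the same label-selector polling can be reproduced outside the test with kubectl wait; in the sketch below the namespaces are assumptions about where these addons normally run, and only the label selectors are taken from the log.

	# Sketch: wait on the same label selectors the log polls, with explicit timeouts.
	# Namespaces (ingress-nginx, kube-system, gcp-auth) are assumed, not taken from the log.
	kubectl --context addons-330674 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m
	kubectl --context addons-330674 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=5m
	kubectl --context addons-330674 -n gcp-auth wait pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=5m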
	I0926 22:31:29.894273   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:30.655606   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:30.655687   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.655705   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.655977   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.655997   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656006   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.656013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.656033   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:31:30.656218   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.656238   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656214   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	W0926 22:31:30.656316   10530 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
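This closes out the inspektor-gadget enable attempts: every apply in this stretch of the log fails the same client-side validation, "[apiVersion not set, kind not set]" for /etc/kubernetes/addons/ig-crd.yaml, so after the retries the addon is reported as failed. A quick way to look at the offending file on the node, plus the workaround the error message itself names (--validate=false), is sketched below; it assumes the file is still present at the path shown in the log.

	# Inspect the manifest that kubectl keeps rejecting (path taken from the log).
	minikube -p addons-330674 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml
	minikube -p addons-330674 ssh -- sudo grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml

	# The error's own suggestion, if the manifests should be applied anyway while debugging:
	minikube -p addons-330674 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml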
	I0926 22:31:30.659216   10530 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, registry-creds, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0926 22:31:30.660657   10530 addons.go:514] duration metric: took 1m38.321386508s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner registry-creds ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0926 22:31:30.660695   10530 start.go:246] waiting for cluster config update ...
	I0926 22:31:30.660716   10530 start.go:255] writing updated cluster config ...
	I0926 22:31:30.660982   10530 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:30.667682   10530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:30.672263   10530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.678377   10530 pod_ready.go:94] pod "coredns-66bc5c9577-vcwdm" is "Ready"
	I0926 22:31:30.678398   10530 pod_ready.go:86] duration metric: took 6.113857ms for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.681561   10530 pod_ready.go:83] waiting for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.687574   10530 pod_ready.go:94] pod "etcd-addons-330674" is "Ready"
	I0926 22:31:30.687599   10530 pod_ready.go:86] duration metric: took 6.011516ms for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.690685   10530 pod_ready.go:83] waiting for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.695334   10530 pod_ready.go:94] pod "kube-apiserver-addons-330674" is "Ready"
	I0926 22:31:30.695353   10530 pod_ready.go:86] duration metric: took 4.646437ms for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.697972   10530 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.073074   10530 pod_ready.go:94] pod "kube-controller-manager-addons-330674" is "Ready"
	I0926 22:31:31.073098   10530 pod_ready.go:86] duration metric: took 375.106541ms for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.272175   10530 pod_ready.go:83] waiting for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.672837   10530 pod_ready.go:94] pod "kube-proxy-lldr6" is "Ready"
	I0926 22:31:31.672859   10530 pod_ready.go:86] duration metric: took 400.65065ms for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.872942   10530 pod_ready.go:83] waiting for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272335   10530 pod_ready.go:94] pod "kube-scheduler-addons-330674" is "Ready"
	I0926 22:31:32.272368   10530 pod_ready.go:86] duration metric: took 399.399542ms for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272382   10530 pod_ready.go:40] duration metric: took 1.604672258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:32.319206   10530 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:32.320852   10530 out.go:179] * Done! kubectl is now configured to use "addons-330674" cluster and "default" namespace by default
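The final readiness pass before "Done!" checks the core kube-system pods by the labels listed at 22:31:30.667682 (k8s-app=kube-dns, component=etcd, and so on) and finishes in about 1.6 seconds. The same spot-check can be repeated by hand; the loop below uses only those label selectors from the log.

	# Manual spot-check of the same kube-system pods the extra wait covers.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context addons-330674 -n kube-system get pods -l "$sel"
	done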
	
	
	==> CRI-O <==
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.167652892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926408167625460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a73405c-c5ff-4391-9107-55f4557ed6b9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.168369965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6af60d32-19ba-4094-95cb-619543e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.168451490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6af60d32-19ba-4094-95cb-619543e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.168759322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},An
notations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-3306
74,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6af60d32-19ba-4094-95cb-619543e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.210950106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc6aa06e-8431-4c13-9866-90bd83772276 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.211055366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc6aa06e-8431-4c13-9866-90bd83772276 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.213558596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9de1d2fa-ad4e-4eca-b25a-d4cbf314ea75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.214870124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926408214832660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9de1d2fa-ad4e-4eca-b25a-d4cbf314ea75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.216039684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b706bb6c-9065-4f05-81fd-c30b67d0aae1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.216270980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b706bb6c-9065-4f05-81fd-c30b67d0aae1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.217214219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},An
notations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-3306
74,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b706bb6c-9065-4f05-81fd-c30b67d0aae1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.258680576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc38d6d2-971b-4da2-abf3-632f445a10a4 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.258890222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc38d6d2-971b-4da2-abf3-632f445a10a4 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.260516816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41b39699-3e96-4bde-8158-ee7ad3ac38e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.261958197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926408261923048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41b39699-3e96-4bde-8158-ee7ad3ac38e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.262803593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67e1c6db-9100-4465-b75f-3dd26d0ec8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.262907788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67e1c6db-9100-4465-b75f-3dd26d0ec8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.263364600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},An
notations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-3306
74,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67e1c6db-9100-4465-b75f-3dd26d0ec8a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.300464973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6e481d7-edc5-468b-b43b-ad1de265c1a2 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.300557544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6e481d7-edc5-468b-b43b-ad1de265c1a2 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.302594754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c0d528a-d232-426d-affe-51fcf645c6ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.303763730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926408303739931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c0d528a-d232-426d-affe-51fcf645c6ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.304629548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0585b43-6311-452b-b2fa-322fb33e8e3f name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.304685476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0585b43-6311-452b-b2fa-322fb33e8e3f name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:40:08 addons-330674 crio[823]: time="2025-09-26 22:40:08.304981485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,
Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_E
XITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc
7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:
docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provision
er,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},An
notations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-3306
74,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0585b43-6311-452b-b2fa-322fb33e8e3f name=/runtime.v1.RuntimeService/ListContainers
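
The repeated Version, ImageFsInfo and ListContainers requests above are routine CRI status polling against CRI-O; each poll returns the same container list. As a rough sketch, assuming the profile name addons-330674 and that crictl inside the minikube VM is already configured against the CRI-O socket (the default in this image), the same data could be queried by hand with:

	minikube -p addons-330674 ssh -- sudo crictl ps -a
	minikube -p addons-330674 ssh -- sudo crictl version
	minikube -p addons-330674 ssh -- sudo crictl imagefsinfo

These correspond to the RuntimeService/ListContainers, RuntimeService/Version and ImageService/ImageFsInfo calls recorded in the debug log.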
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6b78ecb5174f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   b3f170d8fa06d       busybox
	041e5164edc96       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             8 minutes ago       Running             controller                0                   8725b0863596a       ingress-nginx-controller-9cc49f96f-kbqsf
	051406a4cc7e9       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             8 minutes ago       Exited              patch                     2                   1a394bb7ee033       ingress-nginx-admission-patch-vpbtt
	d53bb00230c09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   9 minutes ago       Exited              create                    0                   b1250bf09824f       ingress-nginx-admission-create-2xzt8
	79a156c91664d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            9 minutes ago       Running             gadget                    0                   9afc50bd46552       gadget-c5fsh
	08d1b73931795       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   4a03161ad649c       kube-ingress-dns-minikube
	22ce52a782ec6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     10 minutes ago      Running             amd-gpu-device-plugin     0                   164540b56841d       amd-gpu-device-plugin-cdb8s
	7dcddaa36c6f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   6f9b047616778       storage-provisioner
	4d80adcca025a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             10 minutes ago      Running             coredns                   0                   4a821382e4a7e       coredns-66bc5c9577-vcwdm
	91c093002446e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             10 minutes ago      Running             kube-proxy                0                   e6bd3271dd6ac       kube-proxy-lldr6
	c14c61340bfb6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             10 minutes ago      Running             kube-controller-manager   0                   f8b0370a64577       kube-controller-manager-addons-330674
	d546b62051d69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             10 minutes ago      Running             etcd                      0                   423d307a9a2ff       etcd-addons-330674
	d71804cd6c0cd       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             10 minutes ago      Running             kube-scheduler            0                   a5800cbdc6985       kube-scheduler-addons-330674
	96b63fa3232c4       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             10 minutes ago      Running             kube-apiserver            0                   00739f8fdf157       kube-apiserver-addons-330674
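
Each row in the table above is one CRI-O container, and the hex string in the first column is the container ID that the per-container sections below are keyed on (the coredns excerpt that follows, for example, is taken from container 4d80adcca025a...). Assuming shell access to the node, the same logs could be pulled directly, e.g.:

	minikube -p addons-330674 ssh -- sudo crictl logs 4d80adcca025a

crictl accepts a truncated ID as long as the prefix is unambiguous.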
	
	
	==> coredns [4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed] <==
	[INFO] 10.244.0.8:47574 - 21867 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000257654s
	[INFO] 10.244.0.8:47574 - 26774 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000212097s
	[INFO] 10.244.0.8:47574 - 28009 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001958391s
	[INFO] 10.244.0.8:47574 - 26885 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119263s
	[INFO] 10.244.0.8:47574 - 5147 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092914s
	[INFO] 10.244.0.8:47574 - 63848 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014382s
	[INFO] 10.244.0.8:47574 - 22153 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00016125s
	[INFO] 10.244.0.8:33854 - 30980 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169209s
	[INFO] 10.244.0.8:33854 - 31323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000369966s
	[INFO] 10.244.0.8:44393 - 54969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066964s
	[INFO] 10.244.0.8:44393 - 55232 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145932s
	[INFO] 10.244.0.8:38008 - 63546 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148543s
	[INFO] 10.244.0.8:38008 - 63995 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188374s
	[INFO] 10.244.0.8:57521 - 19791 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072445s
	[INFO] 10.244.0.8:57521 - 19991 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105577s
	[INFO] 10.244.0.23:33438 - 31331 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00059389s
	[INFO] 10.244.0.23:52290 - 40336 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000131355s
	[INFO] 10.244.0.23:36973 - 47600 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124178s
	[INFO] 10.244.0.23:58766 - 34961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000284537s
	[INFO] 10.244.0.23:51619 - 10278 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077755s
	[INFO] 10.244.0.23:56734 - 63793 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152417s
	[INFO] 10.244.0.23:44833 - 26370 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000890787s
	[INFO] 10.244.0.23:51260 - 4851 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001537806s
	[INFO] 10.244.0.26:37540 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260275s
	[INFO] 10.244.0.26:54969 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023223s
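
The NXDOMAIN/NOERROR pairs above are the normal cluster-DNS search-path expansion: with the default ndots:5 resolver options, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (hence the *.svc.cluster.local.svc.cluster.local-style NXDOMAIN answers) before the fully qualified query succeeds with NOERROR. As a sketch, one of these lookups could be reproduced from inside the cluster using the busybox pod that the report already runs in the default namespace:

	kubectl --context addons-330674 exec busybox -- nslookup registry.kube-system.svc.cluster.local

The result should match the NOERROR entries logged by coredns for the 10.244.0.x pod IPs above.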
	
	
	==> describe nodes <==
	Name:               addons-330674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-330674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-330674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-330674
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-330674
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:40:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-330674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0270d5ce774d47cc84b7b73291b9eb86
	  System UUID:                0270d5ce-774d-47cc-84b7-b73291b9eb86
	  Boot ID:                    261e85a6-9bd4-4867-9bbb-7559b9c83c19
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  gadget                      gadget-c5fsh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-kbqsf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-cdb8s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-vcwdm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-330674                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-330674                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-330674       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lldr6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-330674                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-330674 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-330674 event: Registered Node addons-330674 in Controller
	
	
	==> dmesg <==
	[  +5.398131] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.148040] kauditd_printk_skb: 243 callbacks suppressed
	[Sep26 22:30] kauditd_printk_skb: 245 callbacks suppressed
	[  +0.000005] kauditd_printk_skb: 357 callbacks suppressed
	[ +15.526203] kauditd_printk_skb: 172 callbacks suppressed
	[  +5.602328] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.205959] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.429608] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.063342] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.131781] kauditd_printk_skb: 20 callbacks suppressed
	[Sep26 22:31] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000062] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.652404] kauditd_printk_skb: 121 callbacks suppressed
	[  +3.064622] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.314802] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.826353] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.328743] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.807828] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.004597] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.597860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep26 22:32] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.934983] kauditd_printk_skb: 118 callbacks suppressed
	[  +0.000162] kauditd_printk_skb: 173 callbacks suppressed
	[ +19.393202] kauditd_printk_skb: 26 callbacks suppressed
	[Sep26 22:38] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08] <==
	{"level":"info","ts":"2025-09-26T22:30:58.688471Z","caller":"traceutil/trace.go:172","msg":"trace[1166098963] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"115.285521ms","start":"2025-09-26T22:30:58.573171Z","end":"2025-09-26T22:30:58.688457Z","steps":["trace[1166098963] 'process raft request'  (duration: 115.158938ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:06.385950Z","caller":"traceutil/trace.go:172","msg":"trace[171528856] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"207.371807ms","start":"2025-09-26T22:31:06.178555Z","end":"2025-09-26T22:31:06.385927Z","steps":["trace[171528856] 'process raft request'  (duration: 207.21509ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:13.467583Z","caller":"traceutil/trace.go:172","msg":"trace[2032340984] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"148.79533ms","start":"2025-09-26T22:31:13.318772Z","end":"2025-09-26T22:31:13.467568Z","steps":["trace[2032340984] 'process raft request'  (duration: 148.072718ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637720Z","caller":"traceutil/trace.go:172","msg":"trace[1422923240] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1185; }","duration":"228.404518ms","start":"2025-09-26T22:31:14.409297Z","end":"2025-09-26T22:31:14.637701Z","steps":["trace[1422923240] 'read index received'  (duration: 228.396687ms)","trace[1422923240] 'applied index is now lower than readState.Index'  (duration: 6.717µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:14.637858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.541405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:14.637889Z","caller":"traceutil/trace.go:172","msg":"trace[1734423282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1152; }","duration":"228.589602ms","start":"2025-09-26T22:31:14.409293Z","end":"2025-09-26T22:31:14.637882Z","steps":["trace[1734423282] 'agreement among raft nodes before linearized reading'  (duration: 228.514609ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637888Z","caller":"traceutil/trace.go:172","msg":"trace[1864404804] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"251.449676ms","start":"2025-09-26T22:31:14.386428Z","end":"2025-09-26T22:31:14.637877Z","steps":["trace[1864404804] 'process raft request'  (duration: 251.335525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:14.638161Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.799737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-26T22:31:14.638184Z","caller":"traceutil/trace.go:172","msg":"trace[586291321] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1153; }","duration":"167.828944ms","start":"2025-09-26T22:31:14.470349Z","end":"2025-09-26T22:31:14.638178Z","steps":["trace[586291321] 'agreement among raft nodes before linearized reading'  (duration: 167.686895ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.005233Z","caller":"traceutil/trace.go:172","msg":"trace[1859190441] linearizableReadLoop","detail":"{readStateIndex:1191; appliedIndex:1191; }","duration":"205.698958ms","start":"2025-09-26T22:31:15.799518Z","end":"2025-09-26T22:31:16.005217Z","steps":["trace[1859190441] 'read index received'  (duration: 205.694211ms)","trace[1859190441] 'applied index is now lower than readState.Index'  (duration: 3.689µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:16.005429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.897121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:16.005489Z","caller":"traceutil/trace.go:172","msg":"trace[1859758599] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1158; }","duration":"205.970508ms","start":"2025-09-26T22:31:15.799512Z","end":"2025-09-26T22:31:16.005483Z","steps":["trace[1859758599] 'agreement among raft nodes before linearized reading'  (duration: 205.868975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:16.005819Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.13092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.36\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-09-26T22:31:16.005907Z","caller":"traceutil/trace.go:172","msg":"trace[658261611] range","detail":"{range_begin:/registry/masterleases/192.168.39.36; range_end:; response_count:1; response_revision:1159; }","duration":"152.225874ms","start":"2025-09-26T22:31:15.853673Z","end":"2025-09-26T22:31:16.005899Z","steps":["trace[658261611] 'agreement among raft nodes before linearized reading'  (duration: 152.075231ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.006294Z","caller":"traceutil/trace.go:172","msg":"trace[630460783] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"208.955996ms","start":"2025-09-26T22:31:15.797328Z","end":"2025-09-26T22:31:16.006284Z","steps":["trace[630460783] 'process raft request'  (duration: 207.967404ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:20.645825Z","caller":"traceutil/trace.go:172","msg":"trace[1825086522] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"142.24064ms","start":"2025-09-26T22:31:20.503572Z","end":"2025-09-26T22:31:20.645813Z","steps":["trace[1825086522] 'process raft request'  (duration: 142.114273ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:29.646399Z","caller":"traceutil/trace.go:172","msg":"trace[1097200160] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"169.236279ms","start":"2025-09-26T22:31:29.477137Z","end":"2025-09-26T22:31:29.646373Z","steps":["trace[1097200160] 'process raft request'  (duration: 169.149315ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038214Z","caller":"traceutil/trace.go:172","msg":"trace[287194860] linearizableReadLoop","detail":"{readStateIndex:1476; appliedIndex:1476; }","duration":"165.591492ms","start":"2025-09-26T22:31:58.872592Z","end":"2025-09-26T22:31:59.038183Z","steps":["trace[287194860] 'read index received'  (duration: 165.586106ms)","trace[287194860] 'applied index is now lower than readState.Index'  (duration: 4.553µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:59.038434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.843902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:59.038513Z","caller":"traceutil/trace.go:172","msg":"trace[1185068248] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1431; }","duration":"165.936326ms","start":"2025-09-26T22:31:58.872567Z","end":"2025-09-26T22:31:59.038503Z","steps":["trace[1185068248] 'agreement among raft nodes before linearized reading'  (duration: 165.795637ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038517Z","caller":"traceutil/trace.go:172","msg":"trace[270500941] transaction","detail":"{read_only:false; response_revision:1432; number_of_response:1; }","duration":"228.1352ms","start":"2025-09-26T22:31:58.810371Z","end":"2025-09-26T22:31:59.038506Z","steps":["trace[270500941] 'process raft request'  (duration: 227.991076ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:32:29.353426Z","caller":"traceutil/trace.go:172","msg":"trace[1234175230] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"102.357293ms","start":"2025-09-26T22:32:29.251056Z","end":"2025-09-26T22:32:29.353413Z","steps":["trace[1234175230] 'process raft request'  (duration: 102.235629ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:39:42.112462Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2025-09-26T22:39:42.196403Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1859,"took":"83.162065ms","hash":1193661457,"current-db-size-bytes":6254592,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4169728,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2025-09-26T22:39:42.196458Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1193661457,"revision":1859,"compact-revision":-1}
	
	
	==> kernel <==
	 22:40:08 up 10 min,  0 users,  load average: 0.39, 0.46, 0.43
	Linux addons-330674 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46] <==
	I0926 22:33:34.675154       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:33.914198       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:37.466185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:35:58.428524       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:01.607736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:20.743328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:22.383179       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:21.120424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:38:22.262666       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:38:22.263020       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:38:22.306149       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:38:22.306477       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:38:22.322723       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:38:22.322777       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:38:22.341785       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:38:22.342411       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:38:22.377694       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:38:22.378361       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0926 22:38:23.321968       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0926 22:38:23.378873       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0926 22:38:23.519656       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0926 22:38:25.108013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:21.651721       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:30.380312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:43.932144       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce] <==
	E0926 22:38:39.176745       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:38:39.178582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:38:41.022538       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:38:41.024483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:38:42.550687       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:38:42.551983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0926 22:38:50.992886       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^a8c51f6e-9b28-11f0-82b0-8e72b916b739" nodeName="addons-330674" scheduledPods=["default/task-pv-pod"]
	I0926 22:38:51.167708       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0926 22:38:51.167749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:38:51.204137       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0926 22:38:51.204305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:38:55.006195       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:38:55.007284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:03.908147       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:03.909433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:04.716220       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:04.717448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:29.060808       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:29.061935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:45.077752       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:45.078828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:39:46.063177       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:39:46.065052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:40:05.039941       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:40:05.040950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29] <==
	I0926 22:29:52.750738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:52.855140       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:52.855184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.36"]
	E0926 22:29:52.855251       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:53.034433       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:29:53.034497       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:29:53.034529       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:53.056167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:53.056873       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:53.056887       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:53.081717       1 config.go:309] "Starting node config controller"
	I0926 22:29:53.081753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:53.081761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:53.082169       1 config.go:200] "Starting service config controller"
	I0926 22:29:53.082179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:53.082197       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:53.082201       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:53.082211       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:53.082215       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:53.183212       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:29:53.183245       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:53.183259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193] <==
	E0926 22:29:43.950921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:43.950997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:43.952216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:29:43.952553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:43.952622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:43.953940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:43.954122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:43.954127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:29:43.955446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:43.955726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:43.955808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:43.956032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:43.956048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:44.761681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:44.783680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:44.813163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:44.863573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:44.938817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:44.949980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:45.133806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:45.176477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:45.243697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:45.335227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:45.431436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0926 22:29:48.238209       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:39:17 addons-330674 kubelet[1505]: E0926 22:39:17.148465    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926357147727114  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:17 addons-330674 kubelet[1505]: E0926 22:39:17.803291    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:39:20 addons-330674 kubelet[1505]: E0926 22:39:20.805535    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	Sep 26 22:39:25 addons-330674 kubelet[1505]: W0926 22:39:25.430322    1505 logging.go:55] [core] [Channel #71 SubChannel #72]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 26 22:39:27 addons-330674 kubelet[1505]: E0926 22:39:27.150759    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926367150425208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:27 addons-330674 kubelet[1505]: E0926 22:39:27.150805    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926367150425208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:29 addons-330674 kubelet[1505]: E0926 22:39:29.811823    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8d821a63-845c-4938-9b63-a3f7ca3a23d9"
	Sep 26 22:39:31 addons-330674 kubelet[1505]: E0926 22:39:31.803501    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:39:36 addons-330674 kubelet[1505]: I0926 22:39:36.803837    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cdb8s" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:39:37 addons-330674 kubelet[1505]: E0926 22:39:37.153673    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926377153299429  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:37 addons-330674 kubelet[1505]: E0926 22:39:37.153698    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926377153299429  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:44 addons-330674 kubelet[1505]: E0926 22:39:44.805543    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8d821a63-845c-4938-9b63-a3f7ca3a23d9"
	Sep 26 22:39:46 addons-330674 kubelet[1505]: E0926 22:39:46.804787    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:39:47 addons-330674 kubelet[1505]: E0926 22:39:47.157305    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926387156733856  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:47 addons-330674 kubelet[1505]: E0926 22:39:47.157352    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926387156733856  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:57 addons-330674 kubelet[1505]: E0926 22:39:57.159298    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926397158893783  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:39:57 addons-330674 kubelet[1505]: E0926 22:39:57.159349    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926397158893783  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:40:01 addons-330674 kubelet[1505]: E0926 22:40:01.803671    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:40:04 addons-330674 kubelet[1505]: I0926 22:40:04.803530    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:40:04 addons-330674 kubelet[1505]: E0926 22:40:04.954426    1505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:40:04 addons-330674 kubelet[1505]: E0926 22:40:04.954471    1505 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:40:04 addons-330674 kubelet[1505]: E0926 22:40:04.954686    1505 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(cf3126e1-0cb8-4c12-8028-997b82450384): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:40:04 addons-330674 kubelet[1505]: E0926 22:40:04.954717    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	Sep 26 22:40:07 addons-330674 kubelet[1505]: E0926 22:40:07.162437    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926407161937701  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:40:07 addons-330674 kubelet[1505]: E0926 22:40:07.162457    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926407161937701  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed] <==
	W0926 22:39:43.915758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:45.919147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:45.925222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:47.929678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:47.935269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:49.938902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:49.946466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:51.949700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:51.955038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:53.960017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:53.968256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:55.972694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:55.982870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:57.986927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:57.993899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:39:59.998758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:00.007688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:02.011847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:02.017438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:04.020682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:04.028720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:06.031987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:06.037704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:08.043114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:40:08.053916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
helpers_test.go:269: (dbg) Run:  kubectl --context addons-330674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1 (85.707366ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:06 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvdz7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvdz7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-330674
	  Normal   BackOff    49s (x11 over 7m31s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     49s (x11 over 7m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    35s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5s (x5 over 7m32s)    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5s (x5 over 7m32s)    kubelet            Error: ErrImagePull
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:18 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pzlv4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-pzlv4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m51s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-330674
	  Normal   Pulling    2m16s (x4 over 7m50s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     62s (x4 over 6m31s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     62s (x4 over 6m31s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x9 over 6m30s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x9 over 6m30s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:07 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbhvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-gbhvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  8m2s                default-scheduler  Successfully assigned default/test-local-path to addons-330674
	  Warning  Failed     92s (x4 over 7m1s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x4 over 7m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    25s (x9 over 7m1s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     25s (x9 over 7m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    12s (x5 over 8m1s)  kubelet            Pulling image "busybox:stable"
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2xzt8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vpbtt" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable ingress-dns --alsologtostderr -v=1: (1.346828805s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable ingress --alsologtostderr -v=1: (7.853214311s)
--- FAIL: TestAddons/parallel/Ingress (492.42s)
TestAddons/parallel/CSI (376.26s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0926 22:32:13.653584    9914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 22:32:13.668330    9914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 22:32:13.668368    9914 kapi.go:107] duration metric: took 14.796852ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 14.813042ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-330674 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-330674 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6ceec17b-136a-4af6-8734-faa16ecd08bc] Pending
helpers_test.go:352: "task-pv-pod" [6ceec17b-136a-4af6-8734-faa16ecd08bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-26 22:38:19.233015693 +0000 UTC m=+565.063415144
addons_test.go:567: (dbg) Run:  kubectl --context addons-330674 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-330674 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-330674/192.168.39.36
Start Time:       Fri, 26 Sep 2025 22:32:18 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pzlv4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-pzlv4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-330674
Warning  Failed     73s (x3 over 4m41s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     73s (x3 over 4m41s)  kubelet            Error: ErrImagePull
Normal   BackOff    37s (x5 over 4m40s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     37s (x5 over 4m40s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    26s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-330674 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-330674 logs task-pv-pod -n default: exit status 1 (75.541159ms)
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:567: kubectl --context addons-330674 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-330674 -n addons-330674
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 logs -n 25: (1.448208847s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-957403                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-123956 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-123956                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-957403                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-123956                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-019280 --alsologtostderr --binary-mirror http://127.0.0.1:43721 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-019280                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ enable dashboard -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-330674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ enable headlamp -p addons-330674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ ip      │ addons-330674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:07.131240   10530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:07.131540   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131551   10530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:07.131555   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131846   10530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:29:07.132459   10530 out.go:368] Setting JSON to false
	I0926 22:29:07.133384   10530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":692,"bootTime":1758925055,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:07.133472   10530 start.go:140] virtualization: kvm guest
	I0926 22:29:07.135388   10530 out.go:179] * [addons-330674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:07.136853   10530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:07.136850   10530 notify.go:220] Checking for updates...
	I0926 22:29:07.138284   10530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:07.139566   10530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:07.140695   10530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.142048   10530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:07.143327   10530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:07.144805   10530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:07.174434   10530 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 22:29:07.175943   10530 start.go:304] selected driver: kvm2
	I0926 22:29:07.175964   10530 start.go:924] validating driver "kvm2" against <nil>
	I0926 22:29:07.175981   10530 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:07.176689   10530 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.176795   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.190390   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.190423   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.204480   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.204525   10530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:07.204841   10530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:07.204881   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:07.204938   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:07.204949   10530 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:07.205010   10530 start.go:348] cluster config:
	{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:07.205117   10530 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.206957   10530 out.go:179] * Starting "addons-330674" primary control-plane node in "addons-330674" cluster
	I0926 22:29:07.208231   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:07.208282   10530 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:29:07.208298   10530 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:07.208403   10530 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:07.208418   10530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 22:29:07.208880   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:07.208921   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json: {Name:mk7658ee06b88bc4bb74708f21dcb24d049f1fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:07.209105   10530 start.go:360] acquireMachinesLock for addons-330674: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 22:29:07.209167   10530 start.go:364] duration metric: took 45.106µs to acquireMachinesLock for "addons-330674"
	I0926 22:29:07.209187   10530 start.go:93] Provisioning new machine with config: &{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:07.209253   10530 start.go:125] createHost starting for "" (driver="kvm2")
	I0926 22:29:07.210855   10530 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0926 22:29:07.210999   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:07.211043   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:07.224060   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I0926 22:29:07.224551   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:07.225094   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:07.225117   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:07.225449   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:07.225645   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:07.225795   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:07.225959   10530 start.go:159] libmachine.API.Create for "addons-330674" (driver="kvm2")
	I0926 22:29:07.225987   10530 client.go:168] LocalClient.Create starting
	I0926 22:29:07.226026   10530 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem
	I0926 22:29:07.252167   10530 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem
	I0926 22:29:07.383695   10530 main.go:141] libmachine: Running pre-create checks...
	I0926 22:29:07.383717   10530 main.go:141] libmachine: (addons-330674) Calling .PreCreateCheck
	I0926 22:29:07.384236   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:07.384647   10530 main.go:141] libmachine: Creating machine...
	I0926 22:29:07.384660   10530 main.go:141] libmachine: (addons-330674) Calling .Create
	I0926 22:29:07.384806   10530 main.go:141] libmachine: (addons-330674) creating domain...
	I0926 22:29:07.384837   10530 main.go:141] libmachine: (addons-330674) creating network...
	I0926 22:29:07.386337   10530 main.go:141] libmachine: (addons-330674) DBG | found existing default network
	I0926 22:29:07.386536   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.386551   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>default</name>
	I0926 22:29:07.386561   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0926 22:29:07.386567   10530 main.go:141] libmachine: (addons-330674) DBG |   <forward mode='nat'>
	I0926 22:29:07.386576   10530 main.go:141] libmachine: (addons-330674) DBG |     <nat>
	I0926 22:29:07.386584   10530 main.go:141] libmachine: (addons-330674) DBG |       <port start='1024' end='65535'/>
	I0926 22:29:07.386593   10530 main.go:141] libmachine: (addons-330674) DBG |     </nat>
	I0926 22:29:07.386600   10530 main.go:141] libmachine: (addons-330674) DBG |   </forward>
	I0926 22:29:07.386609   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0926 22:29:07.386624   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0926 22:29:07.386674   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0926 22:29:07.386695   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.386722   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0926 22:29:07.386749   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.386765   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.386773   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.386781   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387226   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.387079   10558 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I0926 22:29:07.387252   10530 main.go:141] libmachine: (addons-330674) DBG | defining private network:
	I0926 22:29:07.387264   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387271   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.387280   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.387287   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.387305   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.387341   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.387364   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.387386   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.387410   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.387419   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.387423   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.393131   10530 main.go:141] libmachine: (addons-330674) DBG | creating private network mk-addons-330674 192.168.39.0/24...
	I0926 22:29:07.460176   10530 main.go:141] libmachine: (addons-330674) DBG | private network mk-addons-330674 192.168.39.0/24 created
	I0926 22:29:07.460404   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.460423   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.460433   10530 main.go:141] libmachine: (addons-330674) setting up store path in /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:07.460457   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>e70fd5af-70d4-4d49-913b-79a95d8fca9c</uuid>
	I0926 22:29:07.460472   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0926 22:29:07.460480   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:a6:90:55'/>
	I0926 22:29:07.460493   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.460501   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.460527   10530 main.go:141] libmachine: (addons-330674) building disk image from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 22:29:07.460539   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.460549   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.460556   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.460567   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.460574   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.460593   10530 main.go:141] libmachine: (addons-330674) Downloading /home/jenkins/minikube-integration/21642-6020/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0926 22:29:07.460625   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.460644   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.460403   10558 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.709924   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.709791   10558 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa...
	I0926 22:29:08.463909   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463682   10558 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk...
	I0926 22:29:08.463957   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 (perms=drwx------)
	I0926 22:29:08.463983   10530 main.go:141] libmachine: (addons-330674) DBG | Writing magic tar header
	I0926 22:29:08.463998   10530 main.go:141] libmachine: (addons-330674) DBG | Writing SSH key tar header
	I0926 22:29:08.464006   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463801   10558 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:08.464019   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines (perms=drwxr-xr-x)
	I0926 22:29:08.464034   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube (perms=drwxr-xr-x)
	I0926 22:29:08.464052   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674
	I0926 22:29:08.464064   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020 (perms=drwxrwxr-x)
	I0926 22:29:08.464074   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0926 22:29:08.464080   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0926 22:29:08.464099   10530 main.go:141] libmachine: (addons-330674) defining domain...
	I0926 22:29:08.464155   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines
	I0926 22:29:08.464176   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:08.464184   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020
	I0926 22:29:08.464190   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0926 22:29:08.464208   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins
	I0926 22:29:08.464242   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home
	I0926 22:29:08.464263   10530 main.go:141] libmachine: (addons-330674) DBG | skipping /home - not owner
	I0926 22:29:08.465374   10530 main.go:141] libmachine: (addons-330674) defining domain using XML: 
	I0926 22:29:08.465403   10530 main.go:141] libmachine: (addons-330674) <domain type='kvm'>
	I0926 22:29:08.465410   10530 main.go:141] libmachine: (addons-330674)   <name>addons-330674</name>
	I0926 22:29:08.465415   10530 main.go:141] libmachine: (addons-330674)   <memory unit='MiB'>4096</memory>
	I0926 22:29:08.465420   10530 main.go:141] libmachine: (addons-330674)   <vcpu>2</vcpu>
	I0926 22:29:08.465424   10530 main.go:141] libmachine: (addons-330674)   <features>
	I0926 22:29:08.465428   10530 main.go:141] libmachine: (addons-330674)     <acpi/>
	I0926 22:29:08.465432   10530 main.go:141] libmachine: (addons-330674)     <apic/>
	I0926 22:29:08.465438   10530 main.go:141] libmachine: (addons-330674)     <pae/>
	I0926 22:29:08.465444   10530 main.go:141] libmachine: (addons-330674)   </features>
	I0926 22:29:08.465449   10530 main.go:141] libmachine: (addons-330674)   <cpu mode='host-passthrough'>
	I0926 22:29:08.465453   10530 main.go:141] libmachine: (addons-330674)   </cpu>
	I0926 22:29:08.465458   10530 main.go:141] libmachine: (addons-330674)   <os>
	I0926 22:29:08.465462   10530 main.go:141] libmachine: (addons-330674)     <type>hvm</type>
	I0926 22:29:08.465467   10530 main.go:141] libmachine: (addons-330674)     <boot dev='cdrom'/>
	I0926 22:29:08.465471   10530 main.go:141] libmachine: (addons-330674)     <boot dev='hd'/>
	I0926 22:29:08.465481   10530 main.go:141] libmachine: (addons-330674)     <bootmenu enable='no'/>
	I0926 22:29:08.465491   10530 main.go:141] libmachine: (addons-330674)   </os>
	I0926 22:29:08.465499   10530 main.go:141] libmachine: (addons-330674)   <devices>
	I0926 22:29:08.465506   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='cdrom'>
	I0926 22:29:08.465541   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.465556   10530 main.go:141] libmachine: (addons-330674)       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.465565   10530 main.go:141] libmachine: (addons-330674)       <readonly/>
	I0926 22:29:08.465571   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465580   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='disk'>
	I0926 22:29:08.465592   10530 main.go:141] libmachine: (addons-330674)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0926 22:29:08.465600   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.465607   10530 main.go:141] libmachine: (addons-330674)       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.465612   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465616   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465624   10530 main.go:141] libmachine: (addons-330674)       <source network='mk-addons-330674'/>
	I0926 22:29:08.465630   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465639   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465648   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465665   10530 main.go:141] libmachine: (addons-330674)       <source network='default'/>
	I0926 22:29:08.465676   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465681   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465685   10530 main.go:141] libmachine: (addons-330674)     <serial type='pty'>
	I0926 22:29:08.465690   10530 main.go:141] libmachine: (addons-330674)       <target port='0'/>
	I0926 22:29:08.465696   10530 main.go:141] libmachine: (addons-330674)     </serial>
	I0926 22:29:08.465706   10530 main.go:141] libmachine: (addons-330674)     <console type='pty'>
	I0926 22:29:08.465714   10530 main.go:141] libmachine: (addons-330674)       <target type='serial' port='0'/>
	I0926 22:29:08.465740   10530 main.go:141] libmachine: (addons-330674)     </console>
	I0926 22:29:08.465754   10530 main.go:141] libmachine: (addons-330674)     <rng model='virtio'>
	I0926 22:29:08.465774   10530 main.go:141] libmachine: (addons-330674)       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.465783   10530 main.go:141] libmachine: (addons-330674)     </rng>
	I0926 22:29:08.465790   10530 main.go:141] libmachine: (addons-330674)   </devices>
	I0926 22:29:08.465796   10530 main.go:141] libmachine: (addons-330674) </domain>
	I0926 22:29:08.465805   10530 main.go:141] libmachine: (addons-330674) 
	I0926 22:29:08.473977   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:84:c4:98 in network default
	I0926 22:29:08.474678   10530 main.go:141] libmachine: (addons-330674) starting domain...
	I0926 22:29:08.474698   10530 main.go:141] libmachine: (addons-330674) ensuring networks are active...
	I0926 22:29:08.474707   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:08.475451   10530 main.go:141] libmachine: (addons-330674) Ensuring network default is active
	I0926 22:29:08.475817   10530 main.go:141] libmachine: (addons-330674) Ensuring network mk-addons-330674 is active
	I0926 22:29:08.476435   10530 main.go:141] libmachine: (addons-330674) getting domain XML...
	I0926 22:29:08.477581   10530 main.go:141] libmachine: (addons-330674) DBG | starting domain XML:
	I0926 22:29:08.477607   10530 main.go:141] libmachine: (addons-330674) DBG | <domain type='kvm'>
	I0926 22:29:08.477626   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>addons-330674</name>
	I0926 22:29:08.477633   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>0270d5ce-774d-47cc-84b7-b73291b9eb86</uuid>
	I0926 22:29:08.477643   10530 main.go:141] libmachine: (addons-330674) DBG |   <memory unit='KiB'>4194304</memory>
	I0926 22:29:08.477648   10530 main.go:141] libmachine: (addons-330674) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0926 22:29:08.477654   10530 main.go:141] libmachine: (addons-330674) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 22:29:08.477661   10530 main.go:141] libmachine: (addons-330674) DBG |   <os>
	I0926 22:29:08.477680   10530 main.go:141] libmachine: (addons-330674) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 22:29:08.477689   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='cdrom'/>
	I0926 22:29:08.477699   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='hd'/>
	I0926 22:29:08.477710   10530 main.go:141] libmachine: (addons-330674) DBG |     <bootmenu enable='no'/>
	I0926 22:29:08.477719   10530 main.go:141] libmachine: (addons-330674) DBG |   </os>
	I0926 22:29:08.477724   10530 main.go:141] libmachine: (addons-330674) DBG |   <features>
	I0926 22:29:08.477729   10530 main.go:141] libmachine: (addons-330674) DBG |     <acpi/>
	I0926 22:29:08.477735   10530 main.go:141] libmachine: (addons-330674) DBG |     <apic/>
	I0926 22:29:08.477740   10530 main.go:141] libmachine: (addons-330674) DBG |     <pae/>
	I0926 22:29:08.477744   10530 main.go:141] libmachine: (addons-330674) DBG |   </features>
	I0926 22:29:08.477753   10530 main.go:141] libmachine: (addons-330674) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 22:29:08.477770   10530 main.go:141] libmachine: (addons-330674) DBG |   <clock offset='utc'/>
	I0926 22:29:08.477812   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 22:29:08.477847   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_reboot>restart</on_reboot>
	I0926 22:29:08.477862   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_crash>destroy</on_crash>
	I0926 22:29:08.477872   10530 main.go:141] libmachine: (addons-330674) DBG |   <devices>
	I0926 22:29:08.477883   10530 main.go:141] libmachine: (addons-330674) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 22:29:08.477893   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='cdrom'>
	I0926 22:29:08.477901   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw'/>
	I0926 22:29:08.477910   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.477939   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.477962   10530 main.go:141] libmachine: (addons-330674) DBG |       <readonly/>
	I0926 22:29:08.477976   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 22:29:08.477987   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.477997   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='disk'>
	I0926 22:29:08.478009   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 22:29:08.478027   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.478038   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.478054   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 22:29:08.478064   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.478085   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 22:29:08.478104   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 22:29:08.478118   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478135   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 22:29:08.478148   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 22:29:08.478167   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 22:29:08.478178   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478195   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478213   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:fe:3c:4a'/>
	I0926 22:29:08.478223   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='mk-addons-330674'/>
	I0926 22:29:08.478233   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478243   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 22:29:08.478252   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478264   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478275   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:84:c4:98'/>
	I0926 22:29:08.478286   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='default'/>
	I0926 22:29:08.478308   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478322   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 22:29:08.478330   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478350   10530 main.go:141] libmachine: (addons-330674) DBG |     <serial type='pty'>
	I0926 22:29:08.478362   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='isa-serial' port='0'>
	I0926 22:29:08.478459   10530 main.go:141] libmachine: (addons-330674) DBG |         <model name='isa-serial'/>
	I0926 22:29:08.478491   10530 main.go:141] libmachine: (addons-330674) DBG |       </target>
	I0926 22:29:08.478512   10530 main.go:141] libmachine: (addons-330674) DBG |     </serial>
	I0926 22:29:08.478522   10530 main.go:141] libmachine: (addons-330674) DBG |     <console type='pty'>
	I0926 22:29:08.478537   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='serial' port='0'/>
	I0926 22:29:08.478548   10530 main.go:141] libmachine: (addons-330674) DBG |     </console>
	I0926 22:29:08.478564   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='mouse' bus='ps2'/>
	I0926 22:29:08.478581   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 22:29:08.478595   10530 main.go:141] libmachine: (addons-330674) DBG |     <audio id='1' type='none'/>
	I0926 22:29:08.478607   10530 main.go:141] libmachine: (addons-330674) DBG |     <memballoon model='virtio'>
	I0926 22:29:08.478622   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 22:29:08.478634   10530 main.go:141] libmachine: (addons-330674) DBG |     </memballoon>
	I0926 22:29:08.478649   10530 main.go:141] libmachine: (addons-330674) DBG |     <rng model='virtio'>
	I0926 22:29:08.478659   10530 main.go:141] libmachine: (addons-330674) DBG |       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.478667   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 22:29:08.478674   10530 main.go:141] libmachine: (addons-330674) DBG |     </rng>
	I0926 22:29:08.478679   10530 main.go:141] libmachine: (addons-330674) DBG |   </devices>
	I0926 22:29:08.478685   10530 main.go:141] libmachine: (addons-330674) DBG | </domain>
	I0926 22:29:08.478692   10530 main.go:141] libmachine: (addons-330674) DBG | 
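The XML dump above is the domain definition libvirt actually received. For reference, the define-and-start step that follows can be sketched with the libvirt Go bindings; this is an illustrative sketch only (the libvirt.org/go/libvirt import path, the addons-330674.xml file name, and the qemu:///system URI are assumptions for the example), not the kvm2 driver's own code:

// Sketch: define a domain from an XML description like the one logged above,
// make sure the networks it references are active, then start it.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("addons-330674.xml") // the <domain> definition dumped above
	if err != nil {
		log.Fatal(err)
	}

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// "ensuring networks are active...": start any referenced network that is down.
	for _, name := range []string{"default", "mk-addons-330674"} {
		nw, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if active, _ := nw.IsActive(); !active {
			if err := nw.Create(); err != nil {
				log.Fatal(err)
			}
		}
		nw.Free()
	}

	// "starting domain..."
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}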
	I0926 22:29:09.794414   10530 main.go:141] libmachine: (addons-330674) waiting for domain to start...
	I0926 22:29:09.795757   10530 main.go:141] libmachine: (addons-330674) domain is now running
	I0926 22:29:09.795779   10530 main.go:141] libmachine: (addons-330674) waiting for IP...
	I0926 22:29:09.796619   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:09.797072   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:09.797094   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:09.797358   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:09.797434   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:09.797363   10558 retry.go:31] will retry after 273.626577ms: waiting for domain to come up
	I0926 22:29:10.073299   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.073781   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.073821   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.074074   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.074127   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.074070   10558 retry.go:31] will retry after 328.642045ms: waiting for domain to come up
	I0926 22:29:10.404766   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.405330   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.405358   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.405650   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.405699   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.405633   10558 retry.go:31] will retry after 438.92032ms: waiting for domain to come up
	I0926 22:29:10.846204   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.846643   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.846672   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.846906   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.846933   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.846871   10558 retry.go:31] will retry after 558.153234ms: waiting for domain to come up
	I0926 22:29:11.406899   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.407422   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.407438   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.407834   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.407882   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.407800   10558 retry.go:31] will retry after 539.111569ms: waiting for domain to come up
	I0926 22:29:11.948608   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.949098   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.949119   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.949455   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.949481   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.949435   10558 retry.go:31] will retry after 832.890938ms: waiting for domain to come up
	I0926 22:29:12.784343   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:12.784868   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:12.784895   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:12.785122   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:12.785150   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:12.785094   10558 retry.go:31] will retry after 734.304778ms: waiting for domain to come up
	I0926 22:29:13.521093   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:13.521705   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:13.521742   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:13.521961   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:13.521985   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:13.521931   10558 retry.go:31] will retry after 1.380433504s: waiting for domain to come up
	I0926 22:29:14.904439   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:14.904924   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:14.904953   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:14.905190   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:14.905218   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:14.905169   10558 retry.go:31] will retry after 1.496759703s: waiting for domain to come up
	I0926 22:29:16.404048   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:16.404524   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:16.404544   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:16.404780   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:16.404815   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:16.404749   10558 retry.go:31] will retry after 2.080327572s: waiting for domain to come up
	I0926 22:29:18.486681   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:18.487121   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:18.487136   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:18.487537   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:18.487640   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:18.487542   10558 retry.go:31] will retry after 2.860875374s: waiting for domain to come up
	I0926 22:29:21.351807   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:21.352511   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:21.352546   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:21.352882   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:21.352912   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:21.352841   10558 retry.go:31] will retry after 3.24989466s: waiting for domain to come up
	I0926 22:29:24.605898   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.606496   10530 main.go:141] libmachine: (addons-330674) found domain IP: 192.168.39.36
	I0926 22:29:24.606514   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has current primary IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.606520   10530 main.go:141] libmachine: (addons-330674) reserving static IP address...
	I0926 22:29:24.607058   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find host DHCP lease matching {name: "addons-330674", mac: "52:54:00:fe:3c:4a", ip: "192.168.39.36"} in network mk-addons-330674
	I0926 22:29:24.801972   10530 main.go:141] libmachine: (addons-330674) DBG | Getting to WaitForSSH function...
	I0926 22:29:24.802012   10530 main.go:141] libmachine: (addons-330674) reserved static IP address 192.168.39.36 for domain addons-330674
	I0926 22:29:24.802021   10530 main.go:141] libmachine: (addons-330674) waiting for SSH...
	I0926 22:29:24.805483   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.805987   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.806013   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.806269   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH client type: external
	I0926 22:29:24.806295   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa (-rw-------)
	I0926 22:29:24.806338   10530 main.go:141] libmachine: (addons-330674) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 22:29:24.806355   10530 main.go:141] libmachine: (addons-330674) DBG | About to run SSH command:
	I0926 22:29:24.806382   10530 main.go:141] libmachine: (addons-330674) DBG | exit 0
	I0926 22:29:24.945871   10530 main.go:141] libmachine: (addons-330674) DBG | SSH cmd err, output: <nil>: 
	I0926 22:29:24.946187   10530 main.go:141] libmachine: (addons-330674) domain creation complete
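The "waiting for IP..." loop above polls libvirt's DHCP leases (and then ARP) for the domain's MAC, backing off between attempts as the "will retry after ..." lines show. A minimal sketch of that polling pattern, again assuming the libvirt.org/go/libvirt bindings and a simple doubling backoff rather than minikube's retry package:

// Sketch: wait until the domain's MAC shows up with a lease-assigned address.
package main

import (
	"fmt"
	"log"
	"time"

	"libvirt.org/go/libvirt"
)

func waitForIP(dom *libvirt.Domain, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ifaces, err := dom.ListAllInterfaceAddresses(libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE)
		if err == nil {
			for _, iface := range ifaces {
				if iface.Hwaddr == mac && len(iface.Addrs) > 0 {
					return iface.Addrs[0].Addr, nil
				}
			}
		}
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay *= 2 // crude backoff; the real retry uses jittered delays
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", mac)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("addons-330674")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	ip, err := waitForIP(dom, "52:54:00:fe:3c:4a", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("found domain IP:", ip)
}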
	I0926 22:29:24.946531   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:24.947223   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947466   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947633   10530 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 22:29:24.947649   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:24.949328   10530 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 22:29:24.949342   10530 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 22:29:24.949347   10530 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 22:29:24.949352   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:24.952234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952698   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.952711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952971   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:24.953145   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953333   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953464   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:24.953611   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:24.953903   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:24.953918   10530 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 22:29:25.060937   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
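The SSH readiness probe here is literally "run `exit 0` and see whether it succeeds", first through the external ssh binary and then through the native client. A minimal sketch of the native path, assuming the golang.org/x/crypto/ssh package (the user, key path, address, and StrictHostKeyChecking=no behaviour are taken from the log; the code itself is not minikube's SSH client):

// Sketch: connect as docker@192.168.39.36 with the generated key and run "exit 0".
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}

	client, err := ssh.Dial("tcp", "192.168.39.36:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}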
	I0926 22:29:25.060966   10530 main.go:141] libmachine: Detecting the provisioner...
	I0926 22:29:25.060976   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.064297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064652   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.064684   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064929   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.065163   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065357   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065558   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.065802   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.066092   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.066109   10530 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 22:29:25.175605   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 22:29:25.175676   10530 main.go:141] libmachine: found compatible host: buildroot
	I0926 22:29:25.175689   10530 main.go:141] libmachine: Provisioning with buildroot...
	I0926 22:29:25.175700   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.175985   10530 buildroot.go:166] provisioning hostname "addons-330674"
	I0926 22:29:25.176011   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.176150   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.179382   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.179854   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.179885   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.180043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.180247   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180432   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180575   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.180767   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.181010   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.181024   10530 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-330674 && echo "addons-330674" | sudo tee /etc/hostname
	I0926 22:29:25.307949   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-330674
	
	I0926 22:29:25.307974   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.311584   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312035   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.312067   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312266   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.312427   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312555   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312671   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.312801   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.313027   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.313044   10530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-330674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-330674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-330674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.450755   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.450809   10530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 22:29:25.450872   10530 buildroot.go:174] setting up certificates
	I0926 22:29:25.450885   10530 provision.go:84] configureAuth start
	I0926 22:29:25.450905   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.451192   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:25.454688   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455254   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.455279   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455519   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.458753   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459271   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.459303   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459556   10530 provision.go:143] copyHostCerts
	I0926 22:29:25.459631   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.459785   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.459921   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.459995   10530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.addons-330674 san=[127.0.0.1 192.168.39.36 addons-330674 localhost minikube]
	I0926 22:29:25.636851   10530 provision.go:177] copyRemoteCerts
	I0926 22:29:25.636910   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:25.636931   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.640198   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640611   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.640647   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640899   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.641105   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.641276   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.641432   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:25.727740   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:25.759430   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:25.790642   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 22:29:25.824890   10530 provision.go:87] duration metric: took 373.989122ms to configureAuth
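configureAuth above generates a CA-signed server certificate whose SANs are listed in the log (127.0.0.1, 192.168.39.36, addons-330674, localhost, minikube) and then copies it to /etc/docker on the guest. A self-contained sketch of that kind of certificate generation with Go's crypto/x509; it uses a throwaway in-memory CA and minimal error handling purely for illustration, instead of the ca.pem/ca-key.pem files under .minikube/certs:

// Sketch: issue a server certificate with DNS and IP SANs, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained (error handling elided).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs and org from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-330674"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-330674", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.36")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}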
	I0926 22:29:25.824935   10530 buildroot.go:189] setting minikube options for container-runtime
	I0926 22:29:25.825088   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:25.825156   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.828108   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828481   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.828519   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828682   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.828889   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829082   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829206   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.829377   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.829561   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.829574   10530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 22:29:26.083637   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 22:29:26.083688   10530 main.go:141] libmachine: Checking connection to Docker...
	I0926 22:29:26.083699   10530 main.go:141] libmachine: (addons-330674) Calling .GetURL
	I0926 22:29:26.084980   10530 main.go:141] libmachine: (addons-330674) DBG | using libvirt version 8000000
	I0926 22:29:26.087617   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088034   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.088058   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088261   10530 main.go:141] libmachine: Docker is up and running!
	I0926 22:29:26.088277   10530 main.go:141] libmachine: Reticulating splines...
	I0926 22:29:26.088285   10530 client.go:171] duration metric: took 18.862290788s to LocalClient.Create
	I0926 22:29:26.088309   10530 start.go:167] duration metric: took 18.862351466s to libmachine.API.Create "addons-330674"
	I0926 22:29:26.088318   10530 start.go:293] postStartSetup for "addons-330674" (driver="kvm2")
	I0926 22:29:26.088328   10530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:26.088344   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.088646   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:26.088676   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.091157   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091558   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.091604   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091759   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.091987   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.092140   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.092320   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.179094   10530 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:26.184339   10530 info.go:137] Remote host: Buildroot 2025.02
	I0926 22:29:26.184372   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 22:29:26.184463   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 22:29:26.184504   10530 start.go:296] duration metric: took 96.180038ms for postStartSetup
	I0926 22:29:26.184545   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:26.185197   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.187971   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188443   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.188476   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188748   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:26.188966   10530 start.go:128] duration metric: took 18.979703505s to createHost
	I0926 22:29:26.188989   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.191408   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.191793   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.191847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.192051   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.192216   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192328   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192574   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.192739   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.192982   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:26.192997   10530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 22:29:26.302967   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758925766.258154674
	
	I0926 22:29:26.302991   10530 fix.go:216] guest clock: 1758925766.258154674
	I0926 22:29:26.302998   10530 fix.go:229] Guest: 2025-09-26 22:29:26.258154674 +0000 UTC Remote: 2025-09-26 22:29:26.188978954 +0000 UTC m=+19.093162175 (delta=69.17572ms)
	I0926 22:29:26.303017   10530 fix.go:200] guest clock delta is within tolerance: 69.17572ms
	I0926 22:29:26.303021   10530 start.go:83] releasing machines lock for "addons-330674", held for 19.093844163s
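The guest-clock check above runs `date +%s.%N` on the guest and compares it with the host clock; here the 69ms drift is accepted. A rough sketch of that comparison (the 2-second tolerance is an assumption for illustration; the log only shows that 69ms is within whatever tolerance minikube uses):

// Sketch: parse the guest's `date +%s.%N` output and compute the drift.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	delta, err := clockDelta("1758925766.258154674\n", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Println("drift outside tolerance; the guest clock would need adjusting")
	}
}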
	I0926 22:29:26.303039   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.303314   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.306248   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306677   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.306699   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306871   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307420   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307668   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307796   10530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:26.307854   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.307908   10530 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:26.307928   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.311189   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311728   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311762   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311798   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311816   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.312009   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312028   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312218   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312225   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312441   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312444   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312617   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.312624   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.424051   10530 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:26.430969   10530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 22:29:26.610848   10530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 22:29:26.618574   10530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 22:29:26.618644   10530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:26.640335   10530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 22:29:26.640361   10530 start.go:495] detecting cgroup driver to use...
	I0926 22:29:26.640424   10530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:26.662226   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:26.680146   10530 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:29:26.680210   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:29:26.699354   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:29:26.717303   10530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:29:26.869422   10530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:29:27.077850   10530 docker.go:234] disabling docker service ...
	I0926 22:29:27.077946   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:29:27.096325   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:29:27.112839   10530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:29:27.280087   10530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:29:27.428409   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:27.454379   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:27.481918   10530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 22:29:27.481978   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.496018   10530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 22:29:27.496545   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.511695   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.526954   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.542152   10530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:27.556957   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.570979   10530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.593384   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
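The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pointed at registry.k8s.io/pause:3.10.1, cgroup_manager at cgroupfs, conmon_cgroup at pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A local-file sketch of the first two substitutions (illustrative only; minikube runs the edits remotely through ssh_runner, and the real drop-in contains more keys than shown here):

// Sketch: rewrite pause_image and cgroup_manager in CRI-O's drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // needs root, as in the log's sudo sed
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}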
	I0926 22:29:27.606999   10530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:27.619008   10530 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 22:29:27.619079   10530 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 22:29:27.643401   10530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:29:27.659682   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:27.806017   10530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 22:29:27.921593   10530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 22:29:27.921704   10530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 22:29:27.927956   10530 start.go:563] Will wait 60s for crictl version
	I0926 22:29:27.928056   10530 ssh_runner.go:195] Run: which crictl
	I0926 22:29:27.932464   10530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:27.976200   10530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 22:29:27.976335   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.008853   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.043862   10530 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 22:29:28.045740   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:28.048806   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049367   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:28.049401   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049696   10530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:28.054603   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:28.071477   10530 kubeadm.go:883] updating cluster {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:28.071590   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:28.071633   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:28.118674   10530 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 22:29:28.118764   10530 ssh_runner.go:195] Run: which lz4
	I0926 22:29:28.123934   10530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 22:29:28.129383   10530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 22:29:28.129421   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0926 22:29:29.768442   10530 crio.go:462] duration metric: took 1.644542886s to copy over tarball
	I0926 22:29:29.768520   10530 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 22:29:31.498224   10530 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.729674115s)
	I0926 22:29:31.498261   10530 crio.go:469] duration metric: took 1.729788969s to extract the tarball
	I0926 22:29:31.498271   10530 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 22:29:31.542261   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:31.589755   10530 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:29:31.589778   10530 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:31.589786   10530 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.34.0 crio true true} ...
	I0926 22:29:31.589917   10530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-330674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
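	A note on the two ExecStart= lines in the unit content above: in a systemd drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the second line then installs the minikube-specific command. Once the drop-in has been written (10-kubeadm.conf, a few lines below), the merged unit can be inspected with, for example:

	  systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart above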
	I0926 22:29:31.590004   10530 ssh_runner.go:195] Run: crio config
	I0926 22:29:31.637842   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:31.637869   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:31.637886   10530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:31.637913   10530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-330674 NodeName:addons-330674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:31.638060   10530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-330674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:29:31.638136   10530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:31.651088   10530 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:31.651173   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:31.664460   10530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:31.688820   10530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:31.711364   10530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
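	Before kubeadm init runs further down, the generated config could in principle be sanity-checked on the node; recent kubeadm releases ship a validate subcommand, so a hedged sketch (binary and file paths taken from this log) would be:

	  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new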
	I0926 22:29:31.734280   10530 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:31.738852   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:31.755229   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:31.902308   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:31.937034   10530 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674 for IP: 192.168.39.36
	I0926 22:29:31.937058   10530 certs.go:195] generating shared ca certs ...
	I0926 22:29:31.937074   10530 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.937207   10530 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 22:29:32.026590   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt ...
	I0926 22:29:32.026617   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt: {Name:mk1e3bf23e32e449f89f22a09284a0006a99cefd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026782   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key ...
	I0926 22:29:32.026793   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key: {Name:mk5eaff0d17e330d6fd7ef6fcf7ad742525bef9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026899   10530 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 22:29:32.787420   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt ...
	I0926 22:29:32.787450   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt: {Name:mk6c2cf5ab5d6decc42b76574fbbb2fa2a0d74f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787609   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key ...
	I0926 22:29:32.787622   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key: {Name:mkbbce150377f831f3bce3eb30a4bb3f0e3a8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787695   10530 certs.go:257] generating profile certs ...
	I0926 22:29:32.787750   10530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key
	I0926 22:29:32.787764   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt with IP's: []
	I0926 22:29:32.908998   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt ...
	I0926 22:29:32.909041   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: {Name:mk6078e9e1b406565a2c72ced7e3ab3a671f1de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909244   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key ...
	I0926 22:29:32.909261   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key: {Name:mkf3b0b0d969697c37ccf2b79cfe2d489e612622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909377   10530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab
	I0926 22:29:32.909405   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I0926 22:29:33.576258   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab ...
	I0926 22:29:33.576288   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab: {Name:mk70a5fec9ce790e76bea656ec7f721eddde8def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576479   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab ...
	I0926 22:29:33.576497   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab: {Name:mkfc811bca2f58c6255301ef1bf7f7fc92f29309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576622   10530 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt
	I0926 22:29:33.576725   10530 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key
	I0926 22:29:33.576779   10530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key
	I0926 22:29:33.576798   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt with IP's: []
	I0926 22:29:33.714042   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt ...
	I0926 22:29:33.714078   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt: {Name:mk2e196363dd00f5cf367b53bb1262ff8b58660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714261   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key ...
	I0926 22:29:33.714278   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key: {Name:mk6fa7164da45c401e6803ce35af819baa1796ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714526   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 22:29:33.714563   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:33.714590   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:33.714617   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:33.715164   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:33.757024   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 22:29:33.801115   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:33.836953   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:33.869906   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:33.902538   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:33.933981   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:33.969510   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 22:29:34.000543   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:34.033373   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:34.056131   10530 ssh_runner.go:195] Run: openssl version
	I0926 22:29:34.062810   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:34.076566   10530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082039   10530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082103   10530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.090282   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
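	The two commands above implement OpenSSL's hashed-directory convention: the certificate's subject hash names the symlink that TLS clients actually look up in /etc/ssl/certs. Spelled out as a sketch:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <subject-hash>.0 is the name OpenSSL resolves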
	I0926 22:29:34.104577   10530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:34.110236   10530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:34.110292   10530 kubeadm.go:400] StartCluster: {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:34.110386   10530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 22:29:34.110460   10530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:29:34.153972   10530 cri.go:89] found id: ""
	I0926 22:29:34.154038   10530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:34.166665   10530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:34.179555   10530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:34.192252   10530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:34.192272   10530 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:34.192315   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:34.204361   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:34.204419   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:34.216783   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:34.228359   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:34.228420   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:34.241418   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.253479   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:34.253551   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.266101   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:34.278300   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:34.278381   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:34.291142   10530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 22:29:34.464024   10530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:29:47.445637   10530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:47.445747   10530 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:47.445868   10530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:47.445976   10530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:47.446109   10530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:47.446209   10530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:47.447948   10530 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:47.448061   10530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:47.448147   10530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:47.448269   10530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:47.448325   10530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:47.448386   10530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:47.448429   10530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:47.448504   10530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:47.448610   10530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448701   10530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:47.448884   10530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448982   10530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:47.449075   10530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:47.449133   10530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:47.449183   10530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:47.449259   10530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:47.449346   10530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:47.449422   10530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:47.449517   10530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:47.449600   10530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:47.449705   10530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:47.449800   10530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:47.451527   10530 out.go:252]   - Booting up control plane ...
	I0926 22:29:47.451640   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:47.451715   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:47.451812   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:47.451951   10530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:47.452083   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:47.452213   10530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:47.452327   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:47.452402   10530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:47.452577   10530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:47.452679   10530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:47.452730   10530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001754359s
	I0926 22:29:47.452819   10530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:47.452954   10530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.36:8443/livez
	I0926 22:29:47.453080   10530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:47.453186   10530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:47.453298   10530 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.979294458s
	I0926 22:29:47.453372   10530 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.933266488s
	I0926 22:29:47.453434   10530 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002163771s
	I0926 22:29:47.453584   10530 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:47.453730   10530 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:47.453820   10530 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:47.454057   10530 kubeadm.go:318] [mark-control-plane] Marking the node addons-330674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:47.454109   10530 kubeadm.go:318] [bootstrap-token] Using token: fhdqe8.jaemq9w7cxwr09ny
	I0926 22:29:47.456600   10530 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:47.456703   10530 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:47.456774   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:47.456924   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:47.457204   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:47.457400   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:47.457529   10530 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:47.457694   10530 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:47.457760   10530 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:47.457852   10530 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:47.457878   10530 kubeadm.go:318] 
	I0926 22:29:47.457966   10530 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:47.457989   10530 kubeadm.go:318] 
	I0926 22:29:47.458096   10530 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:47.458110   10530 kubeadm.go:318] 
	I0926 22:29:47.458158   10530 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:47.458244   10530 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:47.458315   10530 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:47.458324   10530 kubeadm.go:318] 
	I0926 22:29:47.458397   10530 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:47.458406   10530 kubeadm.go:318] 
	I0926 22:29:47.458474   10530 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:47.458483   10530 kubeadm.go:318] 
	I0926 22:29:47.458552   10530 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:47.458681   10530 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:47.458813   10530 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:47.458841   10530 kubeadm.go:318] 
	I0926 22:29:47.458968   10530 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:47.459081   10530 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:47.459092   10530 kubeadm.go:318] 
	I0926 22:29:47.459200   10530 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459342   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 22:29:47.459397   10530 kubeadm.go:318] 	--control-plane 
	I0926 22:29:47.459414   10530 kubeadm.go:318] 
	I0926 22:29:47.459557   10530 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:47.459575   10530 kubeadm.go:318] 
	I0926 22:29:47.459704   10530 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459860   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
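	For reference, the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. The standard recipe for recomputing it (shown as a sketch; the cert path is the one this run copied to the node) is:

	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'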
	I0926 22:29:47.459875   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:47.459885   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:47.462286   10530 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:47.463479   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:47.480090   10530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
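	The 496-byte conflist written above is minikube's bridge CNI configuration. Its exact contents are not shown in this log, but for the 10.244.0.0/16 pod CIDR used here a bridge conflist typically has roughly the following shape (content assumed for illustration, not captured from this run):

	  sudo cat /etc/cni/net.d/1-k8s.conflist
	  # Roughly (illustrative only):
	  # {
	  #   "cniVersion": "1.0.0",
	  #   "name": "bridge",
	  #   "plugins": [
	  #     {"type": "bridge", "bridge": "bridge", "isGateway": true,
	  #      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	  #     {"type": "portmap", "capabilities": {"portMappings": true}}
	  #   ]
	  # }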
	I0926 22:29:47.505223   10530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-330674 minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-330674 minikube.k8s.io/primary=true
	I0926 22:29:47.547348   10530 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:47.696459   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.197390   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.697112   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.197409   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.697305   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.196725   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.697377   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.197169   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.696547   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.197238   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.337983   10530 kubeadm.go:1113] duration metric: took 4.832674675s to wait for elevateKubeSystemPrivileges
	I0926 22:29:52.338028   10530 kubeadm.go:402] duration metric: took 18.227740002s to StartCluster
	I0926 22:29:52.338055   10530 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.338211   10530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:52.338922   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.339193   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:52.339222   10530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:52.339287   10530 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:29:52.339397   10530 addons.go:69] Setting yakd=true in profile "addons-330674"
	I0926 22:29:52.339422   10530 addons.go:238] Setting addon yakd=true in "addons-330674"
	I0926 22:29:52.339438   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.339450   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339442   10530 addons.go:69] Setting inspektor-gadget=true in profile "addons-330674"
	I0926 22:29:52.339484   10530 addons.go:238] Setting addon inspektor-gadget=true in "addons-330674"
	I0926 22:29:52.339489   10530 addons.go:69] Setting registry-creds=true in profile "addons-330674"
	I0926 22:29:52.339500   10530 addons.go:238] Setting addon registry-creds=true in "addons-330674"
	I0926 22:29:52.339517   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339530   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339560   10530 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-330674"
	I0926 22:29:52.339588   10530 addons.go:69] Setting default-storageclass=true in profile "addons-330674"
	I0926 22:29:52.339641   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-330674"
	I0926 22:29:52.339699   10530 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.339712   10530 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-330674"
	I0926 22:29:52.339718   10530 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:29:52.339748   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339759   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339933   10530 addons.go:69] Setting registry=true in profile "addons-330674"
	I0926 22:29:52.339940   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.339946   10530 addons.go:238] Setting addon registry=true in "addons-330674"
	I0926 22:29:52.339964   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339980   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340110   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340158   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340197   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340205   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340206   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340231   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340240   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340291   10530 addons.go:69] Setting metrics-server=true in profile "addons-330674"
	I0926 22:29:52.340304   10530 addons.go:238] Setting addon metrics-server=true in "addons-330674"
	I0926 22:29:52.340326   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340349   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340374   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340392   10530 addons.go:69] Setting cloud-spanner=true in profile "addons-330674"
	I0926 22:29:52.340443   10530 addons.go:238] Setting addon cloud-spanner=true in "addons-330674"
	I0926 22:29:52.340560   10530 addons.go:69] Setting volcano=true in profile "addons-330674"
	I0926 22:29:52.340574   10530 addons.go:238] Setting addon volcano=true in "addons-330674"
	I0926 22:29:52.340604   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340716   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340742   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340788   10530 addons.go:69] Setting volumesnapshots=true in profile "addons-330674"
	I0926 22:29:52.340800   10530 addons.go:238] Setting addon volumesnapshots=true in "addons-330674"
	I0926 22:29:52.340924   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340944   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340986   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.341014   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.341238   10530 addons.go:69] Setting ingress=true in profile "addons-330674"
	I0926 22:29:52.341253   10530 addons.go:69] Setting storage-provisioner=true in profile "addons-330674"
	I0926 22:29:52.341266   10530 addons.go:238] Setting addon storage-provisioner=true in "addons-330674"
	I0926 22:29:52.341300   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341348   10530 addons.go:238] Setting addon ingress=true in "addons-330674"
	I0926 22:29:52.341240   10530 addons.go:69] Setting ingress-dns=true in profile "addons-330674"
	I0926 22:29:52.341383   10530 addons.go:238] Setting addon ingress-dns=true in "addons-330674"
	I0926 22:29:52.341398   10530 addons.go:69] Setting gcp-auth=true in profile "addons-330674"
	I0926 22:29:52.341402   10530 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-330674"
	I0926 22:29:52.341416   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-330674"
	I0926 22:29:52.341435   10530 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.341449   10530 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-330674"
	I0926 22:29:52.341509   10530 mustload.go:65] Loading cluster: addons-330674
	I0926 22:29:52.341572   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341666   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342053   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.342088   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342112   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342422   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342457   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342651   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342763   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342850   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342877   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343172   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.343225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343690   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.343759   10530 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:52.345141   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:52.350321   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350372   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.350322   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350435   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.351572   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.351633   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.358613   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.358684   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.361429   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0926 22:29:52.362310   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.363266   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.363291   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.363782   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.364414   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.364455   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.371191   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0926 22:29:52.371861   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.372561   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.372652   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.375030   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.375692   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.375748   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.375980   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0926 22:29:52.377892   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0926 22:29:52.378610   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.379228   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.379277   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.380418   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.380730   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0926 22:29:52.381210   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.381428   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.382129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.382712   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.382732   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.383155   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.383734   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.383880   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.385421   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.386039   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.386056   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.386744   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.392554   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.392631   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.392957   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0926 22:29:52.403136   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0926 22:29:52.403357   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.404253   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.404397   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.404815   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.405017   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.406177   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I0926 22:29:52.407153   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.407267   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0926 22:29:52.407493   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.408091   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408111   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.408550   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.408710   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408724   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.409166   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.409846   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.409891   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410616   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.410655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410905   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0926 22:29:52.411003   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0926 22:29:52.411804   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.411878   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413166   10530 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-330674"
	I0926 22:29:52.413212   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.413277   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413290   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413308   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413357   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413386   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I0926 22:29:52.413618   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.413655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.413774   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413937   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.413999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413926   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.414268   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0926 22:29:52.414417   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415084   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.415098   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.415167   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415401   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0926 22:29:52.416240   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.416691   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.416706   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.416995   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.417284   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.417356   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.417400   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.417684   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417702   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417745   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417759   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417859   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418249   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418537   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418587   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418801   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418846   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418861   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419363   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419645   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.423740   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424043   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0926 22:29:52.424090   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424576   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0926 22:29:52.426334   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426454   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426607   10530 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:52.427471   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.427488   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.427592   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0926 22:29:52.427968   10530 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:52.427972   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:52.428007   10530 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:52.428043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.428128   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.428207   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.428668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.428709   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.429176   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:52.429193   10530 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:52.429212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.429849   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430116   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430384   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430434   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.430456   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430515   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.430528   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.430743   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.430835   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0926 22:29:52.431092   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.432143   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.432185   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.432703   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0926 22:29:52.433465   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.433715   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.434668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434685   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.434904   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434924   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.435446   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.435463   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.435495   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0926 22:29:52.435973   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.435917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437085   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.437270   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.437297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437337   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.437502   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.437868   10530 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:52.438175   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.438188   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.438549   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0926 22:29:52.438682   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.440037   10530 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:52.440061   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:52.440079   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.442435   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.442474   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.442439   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.442677   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.443200   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.443252   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.444657   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.444806   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.446914   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.447017   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.447041   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.447190   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.447362   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.447543   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.447999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.451596   10530 addons.go:238] Setting addon default-storageclass=true in "addons-330674"
	I0926 22:29:52.451644   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.452021   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.452143   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.452216   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0926 22:29:52.452540   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.452557   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.454847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454885   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.454917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454957   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.455075   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.455146   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.458000   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.458096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.458285   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.458304   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.458720   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.458993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.459085   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.459239   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.463711   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.464398   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.465791   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0926 22:29:52.466371   10530 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:52.466644   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:52.467050   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.467569   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0926 22:29:52.467743   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0926 22:29:52.468044   10530 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:52.468068   10530 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:52.468090   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.468774   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.468790   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.469218   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.470226   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.470297   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:52.470392   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0926 22:29:52.470551   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.471071   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.471372   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.472449   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.472652   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.472891   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.473131   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:52.473349   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.473363   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.473882   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.473998   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0926 22:29:52.474193   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.474752   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.475909   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:52.477113   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.477095   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.477161   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.477185   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.478618   10530 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:52.480579   10530 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:52.480597   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:52.480661   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.480818   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.480951   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0926 22:29:52.481512   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0926 22:29:52.481756   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.481991   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.482171   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.482530   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0926 22:29:52.482811   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483144   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0926 22:29:52.483423   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.483436   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.483520   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483963   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484104   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484127   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.484599   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484633   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484662   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486038   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486072   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0926 22:29:52.486043   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486184   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.486663   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.486711   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486942   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487245   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487310   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487460   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487535   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487597   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487535   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487638   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487646   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487661   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487848   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.487893   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.488103   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488104   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488177   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488361   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488389   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:52.488389   10530 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:52.489656   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.490162   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.490179   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.490193   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0926 22:29:52.490687   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:52.490706   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:52.491492   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.491504   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.491505   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:52.491619   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.491782   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.492610   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.493077   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.493208   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.493847   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.494199   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.494517   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.494954   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:52.495130   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:52.495634   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496189   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496270   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.496863   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.496884   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.497219   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.497702   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:52.497708   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.497731   10530 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:52.497749   10530 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:52.498358   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.498430   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.498469   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.497975   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.498604   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.499305   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.498668   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0926 22:29:52.499351   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.499605   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.499952   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:52.499987   10530 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:52.500004   10530 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:52.500007   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500012   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:52.500023   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500044   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:52.500073   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.499594   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.500501   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.500514   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.500523   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.500129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.500763   10530 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:52.501281   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.501333   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.501341   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.501398   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.501414   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.501434   10530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:52.501736   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.501463   10530 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	W0926 22:29:52.501543   10530 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0926 22:29:52.502042   10530 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:52.502057   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:52.502060   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.502073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502156   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:52.502598   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:52.502619   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502406   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.502678   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.502927   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:52.502977   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:52.503014   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.503203   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.503387   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.503874   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.504066   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.504321   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.504477   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.504534   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.505083   10530 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:52.505131   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:52.505159   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.505214   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:52.505228   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:52.505243   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.510312   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510352   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.510372   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510540   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0926 22:29:52.511814   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.512165   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.512498   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512694   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512861   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512883   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.512940   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513138   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513164   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513463   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.513728   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.513760   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.513773   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513797   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513819   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513862   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514034   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514293   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514309   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.514314   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514335   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514397   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.514417   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.514466   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.514499   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514543   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514683   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514718   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514740   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514803   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514842   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515120   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.515158   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515314   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515313   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.515369   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515519   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515528   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515669   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.515801   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515861   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.516001   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.516811   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.516866   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.517166   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.517319   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.517454   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.517568   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.518367   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.520659   10530 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:52.522403   10530 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:52.523881   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:52.523952   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:52.523993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.527749   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528245   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.528291   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528445   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.528629   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.528784   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.528962   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.529317   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0926 22:29:52.529881   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.530387   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.530406   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.530792   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.530983   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.532944   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.533139   10530 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:52.533157   10530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:52.533177   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.536538   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537055   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.537081   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537257   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.537421   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.537573   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.537707   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	W0926 22:29:52.825147   10530 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
	I0926 22:29:52.825184   10530 retry.go:31] will retry after 357.513028ms: ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
	I0926 22:29:53.505584   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:53.562710   10530 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:53.562734   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:53.582172   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:53.582193   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:53.609139   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:53.610191   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:53.632195   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:53.632223   10530 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:53.685673   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:53.685699   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:53.786927   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:53.923343   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.584115291s)
	I0926 22:29:53.923377   10530 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.578070431s)
	I0926 22:29:53.923465   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:53.923530   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:53.925682   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:53.978909   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:53.978947   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:54.056339   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:54.057576   10530 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:54.057598   10530 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:54.073051   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:54.083110   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:54.083142   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:54.130144   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:54.233603   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:54.233626   10530 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:54.338703   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:54.338734   10530 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:54.348111   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.401871   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:54.427136   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:54.427192   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:54.477977   10530 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.478003   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:54.488580   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:54.488611   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:54.603741   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:54.603771   10530 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:54.621914   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:54.621950   10530 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:54.727962   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:54.727996   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:54.784061   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.798515   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:54.798543   10530 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:54.906797   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:54.906820   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:54.981155   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:55.311775   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:55.311837   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:55.397530   10530 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:55.397562   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:55.499650   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:55.662109   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:55.662147   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:55.768710   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.263082445s)
	I0926 22:29:55.768768   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.768794   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769138   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.769186   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769194   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769212   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.769221   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769505   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769523   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769540   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.841693   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:56.282914   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:56.282938   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:56.489532   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:56.489560   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:56.788009   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:56.788039   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:57.398847   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:57.398877   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:57.915310   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:57.915334   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:58.078605   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:58.892510   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.283337619s)
	I0926 22:29:58.892546   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.282325805s)
	I0926 22:29:58.892563   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892578   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892593   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892605   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892599   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.105645313s)
	I0926 22:29:58.892637   10530 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.969155254s)
	I0926 22:29:58.892657   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892668   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892696   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.969142392s)
	I0926 22:29:58.892721   10530 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
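The pipeline completed just above edits the coredns ConfigMap in place: it injects a hosts block (192.168.39.1 mapped to host.minikube.internal, with fallthrough) ahead of the existing "forward . /etc/resolv.conf" line, adds a log directive before errors, and replaces the ConfigMap. A quick way to inspect the resulting Corefile in this cluster, using the same ConfigMap the sed pipeline targets:

    kubectl --context addons-330674 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}'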
	I0926 22:29:58.892729   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.967021186s)
	I0926 22:29:58.892748   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892757   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892954   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.892998   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893008   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893017   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893125   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893140   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893142   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893148   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893155   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893215   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893234   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893240   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893247   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893253   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893298   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893304   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893311   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893317   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893610   10530 node_ready.go:35] waiting up to 6m0s for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.895454   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895477   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895485   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895492   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895493   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895517   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895539   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895560   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895521   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895572   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895596   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895613   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.929667   10530 node_ready.go:49] node "addons-330674" is "Ready"
	I0926 22:29:58.929709   10530 node_ready.go:38] duration metric: took 36.077495ms for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.929729   10530 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:58.929805   10530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.257543853s)
	I0926 22:29:59.313962   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.313971   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.240823229s)
	I0926 22:29:59.314034   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314042   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314274   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314286   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314294   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314315   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314350   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314358   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314365   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314372   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314617   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314651   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314659   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314968   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.315002   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.315029   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.577160   10530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-330674" context rescaled to 1 replicas
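The line above is minikube trimming the default two-replica coredns Deployment down to one. A rough CLI equivalent of that rescale (a sketch only; the tool performs it through the API rather than by shelling out):

    kubectl --context addons-330674 -n kube-system scale deployment coredns --replicas=1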
	I0926 22:29:59.623147   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.623171   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.623479   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.623536   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.623556   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827523   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.697342297s)
	I0926 22:29:59.827568   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827580   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.827837   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.827864   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.827879   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827899   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827910   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.828169   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.828186   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.961517   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:59.961559   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:59.965295   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.965808   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:59.965856   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.966131   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:59.966338   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:59.966510   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:59.966670   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:30:00.039995   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:00.040024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:00.040328   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:00.040346   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:00.125106   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.776957771s)
	W0926 22:30:00.125185   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:00.125204   10530 retry.go:31] will retry after 258.780744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
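Both attempts fail on the same kubectl validation error: one of the documents in ig-crd.yaml is missing its apiVersion and kind fields, which every object kubectl applies must carry. The later retries with --force hit it again, because --force only changes how conflicting objects are replaced; it does not relax validation (only --validate=false would, as the error message notes). A minimal, hypothetical CustomResourceDefinition with both required fields set, safe to test with a client-side dry run (the group and names here are placeholders, not the actual contents of ig-crd.yaml):

    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.kinvolk.io
    spec:
      group: gadget.kinvolk.io
      names:
        kind: Trace
        plural: traces
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF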
	I0926 22:30:00.324361   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:30:00.385019   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.672566   10530 addons.go:238] Setting addon gcp-auth=true in "addons-330674"
	I0926 22:30:00.672636   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:30:00.673096   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.673137   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.687087   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0926 22:30:00.687645   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.688187   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.688212   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.688516   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.689029   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.689057   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.702335   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0926 22:30:00.702789   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.703222   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.703244   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.703562   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.703802   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:30:00.705815   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:30:00.706084   10530 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:30:00.706107   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:30:00.709280   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709679   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:30:00.709711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709896   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:30:00.710096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:30:00.710284   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:30:00.710443   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:30:02.404757   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.620648683s)
	I0926 22:30:02.404821   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404859   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404866   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.423678426s)
	I0926 22:30:02.404891   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404914   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404943   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.905250926s)
	I0926 22:30:02.404990   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405022   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.563299799s)
	W0926 22:30:02.405057   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:30:02.405082   10530 retry.go:31] will retry after 343.769978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
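This failure is an ordering race rather than a bad manifest: the volumesnapshotclasses CRD is created in the same apply batch, and the API server has not finished establishing the new type by the time the VolumeSnapshotClass object is submitted, so the resource-mapping lookup fails. The retry a few hundred milliseconds later (and the forced reapply further down, completed at 22:30:05) succeeds once the CRD is registered. One way to avoid the race with the same addon files would be to apply the CRD first and wait for it to be established before creating the class, for example:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml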
	I0926 22:30:02.405209   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405221   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405230   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405237   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405258   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405266   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405272   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405336   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405337   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405348   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405356   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405363   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405530   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405546   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405653   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405699   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405707   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405715   10530 addons.go:479] Verifying addon metrics-server=true in "addons-330674"
	I0926 22:30:02.405821   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405866   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405877   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405883   10530 addons.go:479] Verifying addon registry=true in "addons-330674"
	I0926 22:30:02.407363   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.005456123s)
	I0926 22:30:02.407398   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407407   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407601   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407618   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407627   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407635   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407841   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407857   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407865   10530 addons.go:479] Verifying addon ingress=true in "addons-330674"
	I0926 22:30:02.408308   10530 out.go:179] * Verifying registry addon...
	I0926 22:30:02.408383   10530 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-330674 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:30:02.409149   10530 out.go:179] * Verifying ingress addon...
	I0926 22:30:02.410466   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:30:02.411443   10530 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:30:02.443532   10530 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:30:02.443553   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.450316   10530 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:30:02.450340   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:02.749736   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:30:02.955435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.005505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.290182   10530 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.360350718s)
	I0926 22:30:03.290221   10530 api_server.go:72] duration metric: took 10.950960949s to wait for apiserver process to appear ...
	I0926 22:30:03.290227   10530 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:30:03.290245   10530 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0926 22:30:03.291781   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.213122213s)
	I0926 22:30:03.291866   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.291892   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292156   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292173   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292181   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.292189   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292447   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292465   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292477   10530 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:30:03.294391   10530 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:30:03.297053   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:30:03.321731   10530 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0926 22:30:03.330874   10530 api_server.go:141] control plane version: v1.34.0
	I0926 22:30:03.330909   10530 api_server.go:131] duration metric: took 40.674253ms to wait for apiserver health ...
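The health gate above is simply the apiserver's /healthz endpoint returning HTTP 200 with the body "ok". An equivalent manual probe against this cluster, assuming anonymous access to the health endpoints is enabled (the kubeadm default) and skipping certificate verification since the cluster CA is not in the local trust store:

    curl -k https://192.168.39.36:8443/healthz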
	I0926 22:30:03.330920   10530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:30:03.344023   10530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:30:03.344056   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.403718   10530 system_pods.go:59] 20 kube-system pods found
	I0926 22:30:03.403767   10530 system_pods.go:61] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.403775   10530 system_pods.go:61] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.403782   10530 system_pods.go:61] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.403788   10530 system_pods.go:61] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.403793   10530 system_pods.go:61] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.403801   10530 system_pods.go:61] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.403805   10530 system_pods.go:61] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.403809   10530 system_pods.go:61] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.403814   10530 system_pods.go:61] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.403839   10530 system_pods.go:61] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.403855   10530 system_pods.go:61] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.403861   10530 system_pods.go:61] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.403868   10530 system_pods.go:61] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.403877   10530 system_pods.go:61] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.403885   10530 system_pods.go:61] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.403892   10530 system_pods.go:61] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.403899   10530 system_pods.go:61] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.403905   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403911   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403923   10530 system_pods.go:61] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.403929   10530 system_pods.go:74] duration metric: took 73.003109ms to wait for pod list to return data ...
	I0926 22:30:03.403938   10530 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:30:03.416293   10530 default_sa.go:45] found service account: "default"
	I0926 22:30:03.416322   10530 default_sa.go:55] duration metric: took 12.37763ms for default service account to be created ...
	I0926 22:30:03.416335   10530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:30:03.420408   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.420640   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.435848   10530 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:03.435885   10530 system_pods.go:89] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.435896   10530 system_pods.go:89] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.435903   10530 system_pods.go:89] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.435909   10530 system_pods.go:89] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.435920   10530 system_pods.go:89] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.435926   10530 system_pods.go:89] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.435933   10530 system_pods.go:89] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.435938   10530 system_pods.go:89] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.435943   10530 system_pods.go:89] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.435948   10530 system_pods.go:89] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.435961   10530 system_pods.go:89] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.435968   10530 system_pods.go:89] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.435973   10530 system_pods.go:89] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.435983   10530 system_pods.go:89] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.435990   10530 system_pods.go:89] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.435995   10530 system_pods.go:89] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.436004   10530 system_pods.go:89] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.436011   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436030   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436040   10530 system_pods.go:89] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.436051   10530 system_pods.go:126] duration metric: took 19.710312ms to wait for k8s-apps to be running ...
	I0926 22:30:03.436063   10530 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:30:03.436116   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:30:03.805385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.933120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.935740   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.103360   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.718280199s)
	W0926 22:30:04.103409   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 retry.go:31] will retry after 415.010612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.397332098s)
	I0926 22:30:04.105638   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:04.107144   10530 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:30:04.108740   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:30:04.108757   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:30:04.204504   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:30:04.204558   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:30:04.266226   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.266270   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:30:04.318135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.326300   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.425264   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.425430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.519163   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.804743   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.918462   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.921343   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.305855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.419096   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.420385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.480378   10530 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.044238076s)
	I0926 22:30:05.480434   10530 system_svc.go:56] duration metric: took 2.044366858s WaitForService to wait for kubelet
	I0926 22:30:05.480445   10530 kubeadm.go:586] duration metric: took 13.141186204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:30:05.480467   10530 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:30:05.480379   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.730593729s)
	I0926 22:30:05.480567   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480587   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.480910   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.480930   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.480948   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480958   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.481297   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.481319   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.481322   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:05.490128   10530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 22:30:05.490159   10530 node_conditions.go:123] node cpu capacity is 2
	I0926 22:30:05.490173   10530 node_conditions.go:105] duration metric: took 9.698866ms to run NodePressure ...
	I0926 22:30:05.490188   10530 start.go:241] waiting for startup goroutines ...
	I0926 22:30:05.823251   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.995165   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.995238   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.168992   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.842648363s)
	I0926 22:30:06.169046   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169088   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169430   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169452   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169462   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169470   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169730   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169745   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169769   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:06.170927   10530 addons.go:479] Verifying addon gcp-auth=true in "addons-330674"
	I0926 22:30:06.172988   10530 out.go:179] * Verifying gcp-auth addon...
	I0926 22:30:06.174897   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:30:06.212287   10530 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:30:06.212317   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.312659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.419336   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.421545   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.682289   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.707555   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.188348588s)
	W0926 22:30:06.707615   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:06.707638   10530 retry.go:31] will retry after 690.015659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
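	The stderr above recurs on every retry in this run: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in it is missing the top-level apiVersion and kind fields that every Kubernetes object must declare, so presumably restoring that header in the manifest (rather than passing --validate=false) would clear the error. What the retry.go lines record is minikube re-running the same apply with a growing delay. The Go sketch below is only an illustration of that retry-with-backoff pattern under assumed names and delays; it is not minikube's retry.go, and the kubectl paths are copied from the log purely as example arguments.

	// retry_sketch.go
	//
	// Illustrative sketch of the "apply failed, will retry after ..." behaviour
	// seen in this log. Not minikube's implementation; helper name, attempt
	// count, and delays are assumptions for the example.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs kubectl apply and, on a non-zero exit, retries with an
	// increasing, jittered delay, mirroring the backoff visible in the log.
	func applyWithRetry(attempts int, baseDelay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force",
				"-f", "/etc/kubernetes/addons/ig-crd.yaml",
				"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			// Grow the delay each attempt and add a little jitter.
			delay := baseDelay*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(baseDelay)))
			fmt.Printf("apply failed, will retry after %s\n", delay)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry(5, 500*time.Millisecond); err != nil {
			fmt.Println("giving up:", err)
		}
	}

	Note that in the log the retries never succeed, because each attempt re-applies the same unmodified ig-crd.yaml; backoff only helps for transient failures, not for a manifest that fails validation deterministically.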
	I0926 22:30:06.806300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.928806   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.928935   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.182496   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.398719   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:07.423608   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.424145   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.683323   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.805352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.926676   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.926821   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.183118   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.305133   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.418514   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.420565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.679221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.802855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.849509   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450746787s)
	W0926 22:30:08.849558   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:08.849579   10530 retry.go:31] will retry after 720.875973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:08.914397   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.916076   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.178734   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.301290   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.420684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.421209   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.571363   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:09.684948   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.814626   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.920020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.920521   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.184424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.302867   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.415872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.418972   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.681185   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.802134   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.816960   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245551165s)
	W0926 22:30:10.817021   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:10.817043   10530 retry.go:31] will retry after 1.516018438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:10.916672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.920270   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.178990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.306805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.418242   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.419600   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.680889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.804313   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.914838   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.918376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.180561   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.301512   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.333663   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:12.415805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.419363   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.682335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.804222   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.918788   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.919995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.180331   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.305340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.415577   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.416349   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.683699   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.805707   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.813715   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.480003432s)
	W0926 22:30:13.813753   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.813774   10530 retry.go:31] will retry after 1.257586739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.921625   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.925319   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.180615   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.305510   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.415983   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.416424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.679635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.807576   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.915558   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.917303   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.071517   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:15.181159   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.306945   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.418630   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:15.418800   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.679147   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.893712   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.916744   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.917096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.185591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.304040   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.326267   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.254707359s)
	W0926 22:30:16.326313   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.326336   10530 retry.go:31] will retry after 2.377890696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.416481   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.419518   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.681550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.803052   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.918664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.919009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.182452   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.302075   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.413448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.047202   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.047385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.047552   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.048184   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.179560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.303903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.418028   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.421419   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.680067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.705254   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:18.801213   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.914739   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.917654   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.179344   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.303239   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.418321   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.418678   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.679164   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.806674   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.908858   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.203561998s)
	W0926 22:30:19.908904   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.908926   10530 retry.go:31] will retry after 4.32939773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.917643   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.919920   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.581572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.582550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.583652   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.584766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.679458   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.802582   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.916995   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.918666   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.180913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.332135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.417484   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.417798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.679247   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.801601   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.921505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.923595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.206659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.303078   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.415068   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.416432   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.682206   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.802352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.916004   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.916426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.178440   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.302488   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.416760   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.417074   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.678471   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.801463   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.914659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.915754   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.183326   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.239507   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:24.305343   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.420822   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.422445   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.681588   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.803334   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.920591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.921194   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.181354   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.300531   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.414416   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:25.415291   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.431734   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.19217319s)
	W0926 22:30:25.431806   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.431843   10530 retry.go:31] will retry after 4.927424107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.679778   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.804725   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.917163   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.917189   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.181015   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.302673   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.415255   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.416011   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.932748   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.938776   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.939199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.939659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.179484   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.300382   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.413855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.416495   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.679241   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.803067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.915766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.916504   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.179926   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.303820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.417009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.680438   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.803693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.913738   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.917580   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.183260   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.305035   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.415252   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.421557   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.681884   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.801694   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.917990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.920375   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.183992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.303403   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.359440   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:30.416736   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.418359   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.679889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.802012   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.916345   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.916485   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:31.151193   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.151227   10530 retry.go:31] will retry after 11.763207551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.179522   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.300872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.417428   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.421535   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.683158   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.804166   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.917250   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.919814   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.180485   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.301448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.414799   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.416565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.682199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.802085   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.918254   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.920864   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.180283   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.302044   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.418195   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.420283   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.682205   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.802900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.915518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.917060   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.183894   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.302424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.418071   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.418937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.681883   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.802739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.913927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.918879   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.348473   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.348627   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.447966   10530 kapi.go:107] duration metric: took 33.037496042s to wait for kubernetes.io/minikube-addons=registry ...
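	The kapi.go lines throughout this section poll the cluster until pods matching a label selector report Ready; the line above records that the registry selector took roughly 33s. The sketch below shows that kind of wait loop written against client-go. It is a minimal illustration, not minikube's kapi.go: the selector, namespace, interval, and timeout are assumed values for the example.

	// waitforpods_sketch.go
	//
	// Illustrative wait loop in the spirit of the kapi.go polling above.
	// Assumes a reachable cluster via the default kubeconfig.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForReadyPods polls until at least one pod matching the selector in the
	// namespace has the Ready condition set to True, or the timeout expires.
	func waitForReadyPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							return true, nil
						}
					}
				}
				return false, nil // still Pending / not Ready
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		if err := waitForReadyPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("took %s to wait for registry pods\n", time.Since(start))
	}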
	I0926 22:30:35.448199   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.683550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.802457   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.919287   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.178520   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.307082   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.415664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.678900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.803136   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.917411   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.185045   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.305913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.630651   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.685375   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.802798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.916719   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.181102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.303094   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.417302   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.678435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.801995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.915065   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.178903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.304329   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.416763   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.680033   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.801768   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.920400   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.180647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.304347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.416722   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.680569   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.803376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.917005   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.180461   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.304146   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.417255   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.886447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.888300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.917365   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.180186   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.301635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.419758   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.684808   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.804001   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.915430   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:42.923040   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.179997   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.306383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.417022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.682482   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.804992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.922647   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.178880   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.240115   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.324639979s)
	W0926 22:30:44.240173   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.240195   10530 retry.go:31] will retry after 8.858097577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.303169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.418771   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.679551   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.801684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.916013   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.179885   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.304426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.428618   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.683426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.810100   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.925137   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.179160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.304364   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.448027   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.680201   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.805269   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.918049   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.181812   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.303700   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.415733   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.678623   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.808820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.924088   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.303763   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.424961   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.683665   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.803327   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.916118   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.178848   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.307797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.416656   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.678851   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.802681   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.915714   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.180965   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.302266   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.415480   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.678616   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.804349   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.915318   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.184191   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.304048   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.418336   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.681435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.804006   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.920620   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.183727   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.302182   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.416612   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.680540   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.804272   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.916855   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.099065   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:53.180672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.420113   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.685179   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.804757   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.917568   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.182857   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.302373   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.363811   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.264695675s)
	W0926 22:30:54.363881   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:54.363905   10530 retry.go:31] will retry after 15.55536091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:54.417539   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.681049   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.805028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.915452   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.179696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.301978   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.415794   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.679572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.819347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.918310   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.198401   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.304413   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.419426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.680091   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.801779   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.918752   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.179612   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.301230   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.417433   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.681559   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.804383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.917958   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.184656   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.306258   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.417260   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.698392   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.807597   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.915960   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.185696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.303096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.416022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.683432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.802671   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.916001   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.181296   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.301887   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.427020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.678513   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.801870   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.920491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.185028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.304169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.418926   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.685221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.802805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.915852   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.180224   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.310447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.417773   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.684271   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.802160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.917181   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.179667   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.305578   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.421443   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.679070   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.801937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.915703   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.183143   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.303032   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.416888   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.681175   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.804024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.931508   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.179817   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.303489   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.417042   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.679451   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.802120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.918159   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.182494   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.401415   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.422627   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.679776   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.809902   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.918997   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.181491   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.302724   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.420205   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.680745   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.802430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.917742   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.301417   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.419665   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.679714   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.804244   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.918524   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.179876   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.302541   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.416678   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.680295   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.803785   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.916555   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.919538   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:31:10.182518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.302156   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.417516   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:10.681589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.803589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.918491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.184181   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.304515   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.419292   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.446493   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.526922683s)
	W0926 22:31:11.446528   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.446544   10530 retry.go:31] will retry after 18.44611829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.678436   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.807747   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.919354   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.183063   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.311693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.420067   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.680144   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.802750   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.915380   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.178429   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.304983   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.473623   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.681102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.802854   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.917953   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.183739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.306018   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.646952   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.685595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.802999   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.921890   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.181084   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:15.302376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:15.419849   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.683746   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.022493   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.022587   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.182478   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.302322   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.418598   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.679927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.808355   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.925473   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:17.186059   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.302020   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:17.427294   10530 kapi.go:107] duration metric: took 1m15.015851492s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:31:17.679432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.802560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.182037   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.300453   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.682444   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.804335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.183050   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.303647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.682844   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.801755   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.180116   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.303024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.683340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.802598   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.185647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:21.303560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.682723   10530 kapi.go:107] duration metric: took 1m15.507819233s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:21.684569   10530 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-330674 cluster.
	I0926 22:31:21.685984   10530 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:21.687420   10530 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0926 22:31:21.803101   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.301291   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.802797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.304046   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.801813   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.302450   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.802449   10530 kapi.go:107] duration metric: took 1m21.505395208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 22:31:29.894273   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:30.655606   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:30.655687   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.655705   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.655977   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.655997   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656006   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.656013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.656033   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:31:30.656218   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.656238   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656214   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	W0926 22:31:30.656316   10530 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0926 22:31:30.659216   10530 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, registry-creds, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0926 22:31:30.660657   10530 addons.go:514] duration metric: took 1m38.321386508s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner registry-creds ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0926 22:31:30.660695   10530 start.go:246] waiting for cluster config update ...
	I0926 22:31:30.660716   10530 start.go:255] writing updated cluster config ...
	I0926 22:31:30.660982   10530 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:30.667682   10530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:30.672263   10530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.678377   10530 pod_ready.go:94] pod "coredns-66bc5c9577-vcwdm" is "Ready"
	I0926 22:31:30.678398   10530 pod_ready.go:86] duration metric: took 6.113857ms for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.681561   10530 pod_ready.go:83] waiting for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.687574   10530 pod_ready.go:94] pod "etcd-addons-330674" is "Ready"
	I0926 22:31:30.687599   10530 pod_ready.go:86] duration metric: took 6.011516ms for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.690685   10530 pod_ready.go:83] waiting for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.695334   10530 pod_ready.go:94] pod "kube-apiserver-addons-330674" is "Ready"
	I0926 22:31:30.695353   10530 pod_ready.go:86] duration metric: took 4.646437ms for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.697972   10530 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.073074   10530 pod_ready.go:94] pod "kube-controller-manager-addons-330674" is "Ready"
	I0926 22:31:31.073098   10530 pod_ready.go:86] duration metric: took 375.106541ms for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.272175   10530 pod_ready.go:83] waiting for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.672837   10530 pod_ready.go:94] pod "kube-proxy-lldr6" is "Ready"
	I0926 22:31:31.672859   10530 pod_ready.go:86] duration metric: took 400.65065ms for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.872942   10530 pod_ready.go:83] waiting for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272335   10530 pod_ready.go:94] pod "kube-scheduler-addons-330674" is "Ready"
	I0926 22:31:32.272368   10530 pod_ready.go:86] duration metric: took 399.399542ms for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272382   10530 pod_ready.go:40] duration metric: took 1.604672258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:32.319206   10530 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:32.320852   10530 out.go:179] * Done! kubectl is now configured to use "addons-330674" cluster and "default" namespace by default
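The inspektor-gadget failure logged above is a manifest-validation error rather than a cluster problem: every retry of kubectl apply against /etc/kubernetes/addons/ig-crd.yaml is rejected with "apiVersion not set, kind not set", meaning the generated CRD file is missing the two top-level fields every Kubernetes object must declare. As a rough sketch only (the group, kind, and schema below are illustrative assumptions, not the contents of the real ig-crd.yaml), a well-formed CRD manifest starts like this:

    apiVersion: apiextensions.k8s.io/v1        # required; reported missing in the failing file
    kind: CustomResourceDefinition             # required; reported missing in the failing file
    metadata:
      name: traces.gadget.kinvolk.io           # illustrative name
    spec:
      group: gadget.kinvolk.io
      names:
        kind: Trace
        plural: traces
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

The --validate=false hint in the stderr would only bypass client-side validation; a manifest without apiVersion and kind still cannot be applied, so the underlying fix is a file that carries both fields.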
	
	
	==> CRI-O <==
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.246534588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926300246503111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd44d87e-7124-4674-9265-9fc7e6b0b5cd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.247255180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c561d427-db5d-4bed-80c2-53db6bf0d892 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.247340978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c561d427-db5d-4bed-80c2-53db6bf0d892 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.247975152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d40
74df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9
f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc997
88afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c561d427-db5d-4bed-80c2-53db6bf0d892 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.291052793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4947db2-3060-4c60-8906-f5e0a089fb1b name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.291279544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4947db2-3060-4c60-8906-f5e0a089fb1b name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.292581992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c072fed-17ed-4d12-9a9e-c4a56062958f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.293834398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926300293804026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c072fed-17ed-4d12-9a9e-c4a56062958f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.294579182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8192feb9-8c0d-4dc5-a448-985abb8045d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.294683568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8192feb9-8c0d-4dc5-a448-985abb8045d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.295234558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d40
74df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9
f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc997
88afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8192feb9-8c0d-4dc5-a448-985abb8045d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.333320683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc9ea687-ceee-4a67-a5b1-517e41b1c4e0 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.333540205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc9ea687-ceee-4a67-a5b1-517e41b1c4e0 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.335416058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a367d7eb-6094-4445-af30-a9384fca4858 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.336870837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926300336845982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a367d7eb-6094-4445-af30-a9384fca4858 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.337836246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55a33b04-356f-4755-924d-7f31353c7796 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.337899350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55a33b04-356f-4755-924d-7f31353c7796 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.338609233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d40
74df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9
f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc997
88afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55a33b04-356f-4755-924d-7f31353c7796 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.385834052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db6cffd5-a59e-47cb-8ad8-f4f46b188e88 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.385934889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db6cffd5-a59e-47cb-8ad8-f4f46b188e88 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.387202903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3ad5715-44b3-4006-bc83-7c856e58797d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.389139004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926300389061221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3ad5715-44b3-4006-bc83-7c856e58797d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.390574820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d22a82f3-590c-4ce2-b46d-76b22eb8006d name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.390801906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d22a82f3-590c-4ce2-b46d-76b22eb8006d name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:38:20 addons-330674 crio[823]: time="2025-09-26 22:38:20.392342630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d40
74df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9
f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc997
88afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d22a82f3-590c-4ce2-b46d-76b22eb8006d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c6b78ecb5174f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   b3f170d8fa06d       busybox
	e668eda665a7a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	b538a2e1c158d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a4ebcaf3e79e9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	041e5164edc96       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             7 minutes ago       Running             controller                               0                   8725b0863596a       ingress-nginx-controller-9cc49f96f-kbqsf
	051406a4cc7e9       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             7 minutes ago       Exited              patch                                    2                   1a394bb7ee033       ingress-nginx-admission-patch-vpbtt
	c4d7e9db5f9b6       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a2c84352a2a8e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a9c1d863f8e43       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   9dd7e0e52b989       csi-hostpath-attacher-0
	f0cd7128f9bd9       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   d392af405e051       csi-hostpath-resizer-0
	71f6029288793       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   5cdd7c9d00703       snapshot-controller-7d9fbc56b8-n4kkw
	065e1b8cc9a38       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   a815f5a2dbf40       snapshot-controller-7d9fbc56b8-btkpl
	d53bb00230c09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              create                                   0                   b1250bf09824f       ingress-nginx-admission-create-2xzt8
	ad63674ee61b7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	79a156c91664d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            7 minutes ago       Running             gadget                                   0                   9afc50bd46552       gadget-c5fsh
	08d1b73931795       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   4a03161ad649c       kube-ingress-dns-minikube
	22ce52a782ec6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   164540b56841d       amd-gpu-device-plugin-cdb8s
	7dcddaa36c6f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   6f9b047616778       storage-provisioner
	4d80adcca025a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   4a821382e4a7e       coredns-66bc5c9577-vcwdm
	91c093002446e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             8 minutes ago       Running             kube-proxy                               0                   e6bd3271dd6ac       kube-proxy-lldr6
	c14c61340bfb6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             8 minutes ago       Running             kube-controller-manager                  0                   f8b0370a64577       kube-controller-manager-addons-330674
	d546b62051d69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   423d307a9a2ff       etcd-addons-330674
	d71804cd6c0cd       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             8 minutes ago       Running             kube-scheduler                           0                   a5800cbdc6985       kube-scheduler-addons-330674
	96b63fa3232c4       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             8 minutes ago       Running             kube-apiserver                           0                   00739f8fdf157       kube-apiserver-addons-330674
	
	
	==> coredns [4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed] <==
	[INFO] 10.244.0.8:47574 - 21867 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000257654s
	[INFO] 10.244.0.8:47574 - 26774 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000212097s
	[INFO] 10.244.0.8:47574 - 28009 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001958391s
	[INFO] 10.244.0.8:47574 - 26885 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119263s
	[INFO] 10.244.0.8:47574 - 5147 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092914s
	[INFO] 10.244.0.8:47574 - 63848 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014382s
	[INFO] 10.244.0.8:47574 - 22153 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00016125s
	[INFO] 10.244.0.8:33854 - 30980 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169209s
	[INFO] 10.244.0.8:33854 - 31323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000369966s
	[INFO] 10.244.0.8:44393 - 54969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066964s
	[INFO] 10.244.0.8:44393 - 55232 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145932s
	[INFO] 10.244.0.8:38008 - 63546 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148543s
	[INFO] 10.244.0.8:38008 - 63995 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188374s
	[INFO] 10.244.0.8:57521 - 19791 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072445s
	[INFO] 10.244.0.8:57521 - 19991 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105577s
	[INFO] 10.244.0.23:33438 - 31331 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00059389s
	[INFO] 10.244.0.23:52290 - 40336 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000131355s
	[INFO] 10.244.0.23:36973 - 47600 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124178s
	[INFO] 10.244.0.23:58766 - 34961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000284537s
	[INFO] 10.244.0.23:51619 - 10278 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077755s
	[INFO] 10.244.0.23:56734 - 63793 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152417s
	[INFO] 10.244.0.23:44833 - 26370 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000890787s
	[INFO] 10.244.0.23:51260 - 4851 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001537806s
	[INFO] 10.244.0.26:37540 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260275s
	[INFO] 10.244.0.26:54969 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023223s
	
	
	==> describe nodes <==
	Name:               addons-330674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-330674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-330674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-330674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-330674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-330674
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:38:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-330674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0270d5ce774d47cc84b7b73291b9eb86
	  System UUID:                0270d5ce-774d-47cc-84b7-b73291b9eb86
	  Boot ID:                    261e85a6-9bd4-4867-9bbb-7559b9c83c19
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  gadget                      gadget-c5fsh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-kbqsf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m18s
	  kube-system                 amd-gpu-device-plugin-cdb8s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 coredns-66bc5c9577-vcwdm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m28s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 csi-hostpathplugin-mk92b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 etcd-addons-330674                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m34s
	  kube-system                 kube-apiserver-addons-330674                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-addons-330674       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-proxy-lldr6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-scheduler-addons-330674                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-btkpl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-n4kkw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m42s (x8 over 8m42s)  kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s (x8 over 8m42s)  kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s (x7 over 8m42s)  kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m33s                  kubelet          Node addons-330674 status is now: NodeReady
	  Normal  RegisteredNode           8m30s                  node-controller  Node addons-330674 event: Registered Node addons-330674 in Controller
	
	
	==> dmesg <==
	[  +0.132133] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.398131] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.148040] kauditd_printk_skb: 243 callbacks suppressed
	[Sep26 22:30] kauditd_printk_skb: 245 callbacks suppressed
	[  +0.000005] kauditd_printk_skb: 357 callbacks suppressed
	[ +15.526203] kauditd_printk_skb: 172 callbacks suppressed
	[  +5.602328] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.205959] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.429608] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.063342] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.131781] kauditd_printk_skb: 20 callbacks suppressed
	[Sep26 22:31] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000062] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.652404] kauditd_printk_skb: 121 callbacks suppressed
	[  +3.064622] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.314802] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.826353] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.328743] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.807828] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.004597] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.597860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep26 22:32] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.934983] kauditd_printk_skb: 118 callbacks suppressed
	[  +0.000162] kauditd_printk_skb: 173 callbacks suppressed
	[ +19.393202] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08] <==
	{"level":"info","ts":"2025-09-26T22:30:41.876982Z","caller":"traceutil/trace.go:172","msg":"trace[1158650298] transaction","detail":"{read_only:false; response_revision:991; number_of_response:1; }","duration":"217.468457ms","start":"2025-09-26T22:30:41.659503Z","end":"2025-09-26T22:30:41.876971Z","steps":["trace[1158650298] 'process raft request'  (duration: 217.358381ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:41.877725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.834403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:30:41.878678Z","caller":"traceutil/trace.go:172","msg":"trace[133223068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:991; }","duration":"206.79653ms","start":"2025-09-26T22:30:41.671867Z","end":"2025-09-26T22:30:41.878664Z","steps":["trace[133223068] 'agreement among raft nodes before linearized reading'  (duration: 205.301511ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:30:58.688471Z","caller":"traceutil/trace.go:172","msg":"trace[1166098963] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"115.285521ms","start":"2025-09-26T22:30:58.573171Z","end":"2025-09-26T22:30:58.688457Z","steps":["trace[1166098963] 'process raft request'  (duration: 115.158938ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:06.385950Z","caller":"traceutil/trace.go:172","msg":"trace[171528856] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"207.371807ms","start":"2025-09-26T22:31:06.178555Z","end":"2025-09-26T22:31:06.385927Z","steps":["trace[171528856] 'process raft request'  (duration: 207.21509ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:13.467583Z","caller":"traceutil/trace.go:172","msg":"trace[2032340984] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"148.79533ms","start":"2025-09-26T22:31:13.318772Z","end":"2025-09-26T22:31:13.467568Z","steps":["trace[2032340984] 'process raft request'  (duration: 148.072718ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637720Z","caller":"traceutil/trace.go:172","msg":"trace[1422923240] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1185; }","duration":"228.404518ms","start":"2025-09-26T22:31:14.409297Z","end":"2025-09-26T22:31:14.637701Z","steps":["trace[1422923240] 'read index received'  (duration: 228.396687ms)","trace[1422923240] 'applied index is now lower than readState.Index'  (duration: 6.717µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:14.637858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.541405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:14.637889Z","caller":"traceutil/trace.go:172","msg":"trace[1734423282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1152; }","duration":"228.589602ms","start":"2025-09-26T22:31:14.409293Z","end":"2025-09-26T22:31:14.637882Z","steps":["trace[1734423282] 'agreement among raft nodes before linearized reading'  (duration: 228.514609ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637888Z","caller":"traceutil/trace.go:172","msg":"trace[1864404804] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"251.449676ms","start":"2025-09-26T22:31:14.386428Z","end":"2025-09-26T22:31:14.637877Z","steps":["trace[1864404804] 'process raft request'  (duration: 251.335525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:14.638161Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.799737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-26T22:31:14.638184Z","caller":"traceutil/trace.go:172","msg":"trace[586291321] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1153; }","duration":"167.828944ms","start":"2025-09-26T22:31:14.470349Z","end":"2025-09-26T22:31:14.638178Z","steps":["trace[586291321] 'agreement among raft nodes before linearized reading'  (duration: 167.686895ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.005233Z","caller":"traceutil/trace.go:172","msg":"trace[1859190441] linearizableReadLoop","detail":"{readStateIndex:1191; appliedIndex:1191; }","duration":"205.698958ms","start":"2025-09-26T22:31:15.799518Z","end":"2025-09-26T22:31:16.005217Z","steps":["trace[1859190441] 'read index received'  (duration: 205.694211ms)","trace[1859190441] 'applied index is now lower than readState.Index'  (duration: 3.689µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:16.005429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.897121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:16.005489Z","caller":"traceutil/trace.go:172","msg":"trace[1859758599] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1158; }","duration":"205.970508ms","start":"2025-09-26T22:31:15.799512Z","end":"2025-09-26T22:31:16.005483Z","steps":["trace[1859758599] 'agreement among raft nodes before linearized reading'  (duration: 205.868975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:16.005819Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.13092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.36\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-09-26T22:31:16.005907Z","caller":"traceutil/trace.go:172","msg":"trace[658261611] range","detail":"{range_begin:/registry/masterleases/192.168.39.36; range_end:; response_count:1; response_revision:1159; }","duration":"152.225874ms","start":"2025-09-26T22:31:15.853673Z","end":"2025-09-26T22:31:16.005899Z","steps":["trace[658261611] 'agreement among raft nodes before linearized reading'  (duration: 152.075231ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.006294Z","caller":"traceutil/trace.go:172","msg":"trace[630460783] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"208.955996ms","start":"2025-09-26T22:31:15.797328Z","end":"2025-09-26T22:31:16.006284Z","steps":["trace[630460783] 'process raft request'  (duration: 207.967404ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:20.645825Z","caller":"traceutil/trace.go:172","msg":"trace[1825086522] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"142.24064ms","start":"2025-09-26T22:31:20.503572Z","end":"2025-09-26T22:31:20.645813Z","steps":["trace[1825086522] 'process raft request'  (duration: 142.114273ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:29.646399Z","caller":"traceutil/trace.go:172","msg":"trace[1097200160] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"169.236279ms","start":"2025-09-26T22:31:29.477137Z","end":"2025-09-26T22:31:29.646373Z","steps":["trace[1097200160] 'process raft request'  (duration: 169.149315ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038214Z","caller":"traceutil/trace.go:172","msg":"trace[287194860] linearizableReadLoop","detail":"{readStateIndex:1476; appliedIndex:1476; }","duration":"165.591492ms","start":"2025-09-26T22:31:58.872592Z","end":"2025-09-26T22:31:59.038183Z","steps":["trace[287194860] 'read index received'  (duration: 165.586106ms)","trace[287194860] 'applied index is now lower than readState.Index'  (duration: 4.553µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:59.038434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.843902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:59.038513Z","caller":"traceutil/trace.go:172","msg":"trace[1185068248] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1431; }","duration":"165.936326ms","start":"2025-09-26T22:31:58.872567Z","end":"2025-09-26T22:31:59.038503Z","steps":["trace[1185068248] 'agreement among raft nodes before linearized reading'  (duration: 165.795637ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038517Z","caller":"traceutil/trace.go:172","msg":"trace[270500941] transaction","detail":"{read_only:false; response_revision:1432; number_of_response:1; }","duration":"228.1352ms","start":"2025-09-26T22:31:58.810371Z","end":"2025-09-26T22:31:59.038506Z","steps":["trace[270500941] 'process raft request'  (duration: 227.991076ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:32:29.353426Z","caller":"traceutil/trace.go:172","msg":"trace[1234175230] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"102.357293ms","start":"2025-09-26T22:32:29.251056Z","end":"2025-09-26T22:32:29.353413Z","steps":["trace[1234175230] 'process raft request'  (duration: 102.235629ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:38:20 up 9 min,  0 users,  load average: 0.10, 0.49, 0.44
	Linux addons-330674 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46] <==
	E0926 22:30:46.464001       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0926 22:30:46.466655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	E0926 22:30:46.470932       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	E0926 22:30:46.472988       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	I0926 22:30:46.609462       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0926 22:31:07.128656       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0926 22:31:41.108423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:44004: use of closed network connection
	E0926 22:31:41.314606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:44040: use of closed network connection
	I0926 22:31:45.954847       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:31:50.616495       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.104.222"}
	I0926 22:32:06.691946       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0926 22:32:06.935381       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.87.170"}
	I0926 22:32:31.077795       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:32:47.481193       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0926 22:33:04.609647       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:33:34.675154       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:33.914198       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:37.466185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:35:58.428524       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:36:01.607736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:20.743328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:37:22.383179       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce] <==
	I0926 22:29:50.964288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:29:50.964435       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0926 22:29:50.964792       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:29:50.965035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:29:50.965437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:29:50.965524       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0926 22:29:50.965748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:29:50.966394       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:29:50.966608       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:29:50.966942       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:29:50.972568       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:29:50.975319       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:29:50.987415       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	E0926 22:29:59.198407       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0926 22:30:20.932946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 22:30:20.933180       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0926 22:30:20.933255       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0926 22:30:20.964612       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0926 22:30:20.972932       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0926 22:30:21.033427       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:30:21.073363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:31:50.680482       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-85f8f8dc54\" failed with pods \"headlamp-85f8f8dc54-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0926 22:31:54.682707       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0926 22:32:05.933470       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0926 22:32:17.836527       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29] <==
	I0926 22:29:52.750738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:52.855140       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:52.855184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.36"]
	E0926 22:29:52.855251       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:53.034433       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:29:53.034497       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:29:53.034529       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:53.056167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:53.056873       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:53.056887       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:53.081717       1 config.go:309] "Starting node config controller"
	I0926 22:29:53.081753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:53.081761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:53.082169       1 config.go:200] "Starting service config controller"
	I0926 22:29:53.082179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:53.082197       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:53.082201       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:53.082211       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:53.082215       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:53.183212       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:29:53.183245       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:53.183259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193] <==
	E0926 22:29:43.950921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:43.950997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:43.952216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:29:43.952553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:43.952622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:43.953940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:43.954122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:43.954127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:29:43.955446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:43.955726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:43.955808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:43.956032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:43.956048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:44.761681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:44.783680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:44.813163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:44.863573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:44.938817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:44.949980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:45.133806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:45.176477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:45.243697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:45.335227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:45.431436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0926 22:29:48.238209       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:37:07 addons-330674 kubelet[1505]: E0926 22:37:07.097758    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926227097416587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:17 addons-330674 kubelet[1505]: E0926 22:37:17.100739    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926237100326103  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:17 addons-330674 kubelet[1505]: E0926 22:37:17.100764    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926237100326103  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:19 addons-330674 kubelet[1505]: E0926 22:37:19.803028    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:37:23 addons-330674 kubelet[1505]: I0926 22:37:23.803609    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cdb8s" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:37:27 addons-330674 kubelet[1505]: E0926 22:37:27.104717    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926247104349671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:27 addons-330674 kubelet[1505]: E0926 22:37:27.104764    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926247104349671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:30 addons-330674 kubelet[1505]: E0926 22:37:30.808553    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:37:37 addons-330674 kubelet[1505]: E0926 22:37:37.111842    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926257110860407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:37 addons-330674 kubelet[1505]: E0926 22:37:37.111866    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926257110860407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:42 addons-330674 kubelet[1505]: E0926 22:37:42.803334    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6ceec17b-136a-4af6-8734-faa16ecd08bc"
	Sep 26 22:37:47 addons-330674 kubelet[1505]: E0926 22:37:47.118522    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926267116959879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:47 addons-330674 kubelet[1505]: E0926 22:37:47.118547    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926267116959879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:51 addons-330674 kubelet[1505]: I0926 22:37:51.803229    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:37:57 addons-330674 kubelet[1505]: E0926 22:37:57.120764    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926277120141154  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:37:57 addons-330674 kubelet[1505]: E0926 22:37:57.121583    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926277120141154  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.123965    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926287123683649  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.123987    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926287123683649  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.171870    1505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.171929    1505 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.172179    1505 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(cf3126e1-0cb8-4c12-8028-997b82450384): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:38:07 addons-330674 kubelet[1505]: E0926 22:38:07.172270    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	Sep 26 22:38:17 addons-330674 kubelet[1505]: E0926 22:38:17.126317    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926297125852885  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:38:17 addons-330674 kubelet[1505]: E0926 22:38:17.126369    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926297125852885  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:38:19 addons-330674 kubelet[1505]: E0926 22:38:19.813000    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	
	
	==> storage-provisioner [7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed] <==
	W0926 22:37:55.240065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:37:57.243885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:37:57.252334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:37:59.257820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:37:59.264014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:01.267888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:01.274979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:03.279785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:03.286883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:05.290686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:05.296720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:07.300329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:07.306719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:09.311789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:09.321670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:11.325752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:11.332619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:13.338661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:13.345390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:15.349393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:15.358537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:17.362484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:17.370824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:19.375637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:38:19.382724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
helpers_test.go:269: (dbg) Run:  kubectl --context addons-330674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1 (91.027207ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:06 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvdz7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvdz7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m15s                default-scheduler  Successfully assigned default/nginx to addons-330674
	  Normal   Pulling    87s (x4 over 6m14s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     14s (x4 over 5m44s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     14s (x4 over 5m44s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x6 over 5m43s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x6 over 5m43s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:18 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pzlv4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-pzlv4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-330674
	  Warning  Failed     75s (x3 over 4m43s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x3 over 4m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x5 over 4m42s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     39s (x5 over 4m42s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    28s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:07 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbhvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-gbhvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m14s                 default-scheduler  Successfully assigned default/test-local-path to addons-330674
	  Warning  Failed     105s (x3 over 5m13s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x3 over 5m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    76s (x4 over 5m13s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     76s (x4 over 5m13s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    61s (x4 over 6m13s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2xzt8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vpbtt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable volumesnapshots --alsologtostderr -v=1: (1.006950479s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.118505506s)
--- FAIL: TestAddons/parallel/CSI (376.26s)
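Note: the pod failures quoted above are docker.io rate limiting ("toomanyrequests" on unauthenticated pulls), not CSI regressions. As an illustrative mitigation only (nothing below was run as part of this test), the images could be side-loaded into the profile so the kubelet never pulls from Docker Hub, assuming they are already present on the CI host:
out/minikube-linux-amd64 -p addons-330674 image load docker.io/nginx
out/minikube-linux-amd64 -p addons-330674 image load docker.io/nginx:alpine
Alternatively, the profile could be started against a registry mirror (minikube start has a --registry-mirror flag), though that changes the environment these addon tests are meant to exercise.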

                                                
                                    
x
+
TestAddons/parallel/LocalPath (232.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-330674 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-330674 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-330674 get pvc test-pvc -o jsonpath={.status.phase} -n default
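For reference, the repeated phase polls above are the helper waiting for the claim to bind; an equivalent hand-run check (illustrative only, not something the harness executes) would be:
kubectl --context addons-330674 wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc -n default --timeout=5m0s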
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8d821a63-845c-4938-9b63-a3f7ca3a23d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-09-26 22:35:07.699847481 +0000 UTC m=+373.530246953
addons_test.go:962: (dbg) Run:  kubectl --context addons-330674 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-330674 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-330674/192.168.39.36
Start Time:       Fri, 26 Sep 2025 22:32:07 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbhvc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-gbhvc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/test-local-path to addons-330674
Normal   BackOff    119s                  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     119s                  kubelet            Error: ImagePullBackOff
Normal   Pulling    106s (x2 over 2m59s)  kubelet            Pulling image "busybox:stable"
Warning  Failed     15s (x2 over 119s)    kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15s (x2 over 119s)    kubelet            Error: ErrImagePull
addons_test.go:962: (dbg) Run:  kubectl --context addons-330674 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-330674 logs test-local-path -n default: exit status 1 (87.877412ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:962: kubectl --context addons-330674 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
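A quick way to confirm this is registry rate limiting rather than cluster networking (again illustrative, not executed by the test) is to attempt the pull directly on the node:
out/minikube-linux-amd64 -p addons-330674 ssh -- sudo crictl pull docker.io/library/busybox:stable
With CRI-O as the runtime, crictl should surface the same toomanyrequests error while Docker Hub's unauthenticated limit is in effect.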
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-330674 -n addons-330674
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 logs -n 25: (1.428461694s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-957403                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-123956 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-123956                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-957403                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-123956                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ --download-only -p binary-mirror-019280 --alsologtostderr --binary-mirror http://127.0.0.1:43721 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-019280                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-019280 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ enable dashboard -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-330674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ enable headlamp -p addons-330674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:31 UTC │ 26 Sep 25 22:31 UTC │
	│ addons  │ addons-330674 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ ip      │ addons-330674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-330674                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-330674 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-330674        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:07.131240   10530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:07.131540   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131551   10530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:07.131555   10530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:07.131846   10530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:29:07.132459   10530 out.go:368] Setting JSON to false
	I0926 22:29:07.133384   10530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":692,"bootTime":1758925055,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:07.133472   10530 start.go:140] virtualization: kvm guest
	I0926 22:29:07.135388   10530 out.go:179] * [addons-330674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:07.136853   10530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:07.136850   10530 notify.go:220] Checking for updates...
	I0926 22:29:07.138284   10530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:07.139566   10530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:07.140695   10530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.142048   10530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:07.143327   10530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:07.144805   10530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:07.174434   10530 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 22:29:07.175943   10530 start.go:304] selected driver: kvm2
	I0926 22:29:07.175964   10530 start.go:924] validating driver "kvm2" against <nil>
	I0926 22:29:07.175981   10530 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:07.176689   10530 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.176795   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.190390   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.190423   10530 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:29:07.204480   10530 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:29:07.204525   10530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:07.204841   10530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:07.204881   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:07.204938   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:07.204949   10530 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:07.205010   10530 start.go:348] cluster config:
	{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:07.205117   10530 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:07.206957   10530 out.go:179] * Starting "addons-330674" primary control-plane node in "addons-330674" cluster
	I0926 22:29:07.208231   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:07.208282   10530 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:29:07.208298   10530 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:07.208403   10530 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:07.208418   10530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 22:29:07.208880   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:07.208921   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json: {Name:mk7658ee06b88bc4bb74708f21dcb24d049f1fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:07.209105   10530 start.go:360] acquireMachinesLock for addons-330674: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 22:29:07.209167   10530 start.go:364] duration metric: took 45.106µs to acquireMachinesLock for "addons-330674"
	I0926 22:29:07.209187   10530 start.go:93] Provisioning new machine with config: &{Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:07.209253   10530 start.go:125] createHost starting for "" (driver="kvm2")
	I0926 22:29:07.210855   10530 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0926 22:29:07.210999   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:07.211043   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:07.224060   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I0926 22:29:07.224551   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:07.225094   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:07.225117   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:07.225449   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:07.225645   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:07.225795   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:07.225959   10530 start.go:159] libmachine.API.Create for "addons-330674" (driver="kvm2")
	I0926 22:29:07.225987   10530 client.go:168] LocalClient.Create starting
	I0926 22:29:07.226026   10530 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem
	I0926 22:29:07.252167   10530 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem
	I0926 22:29:07.383695   10530 main.go:141] libmachine: Running pre-create checks...
	I0926 22:29:07.383717   10530 main.go:141] libmachine: (addons-330674) Calling .PreCreateCheck
	I0926 22:29:07.384236   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:07.384647   10530 main.go:141] libmachine: Creating machine...
	I0926 22:29:07.384660   10530 main.go:141] libmachine: (addons-330674) Calling .Create
	I0926 22:29:07.384806   10530 main.go:141] libmachine: (addons-330674) creating domain...
	I0926 22:29:07.384837   10530 main.go:141] libmachine: (addons-330674) creating network...
	I0926 22:29:07.386337   10530 main.go:141] libmachine: (addons-330674) DBG | found existing default network
	I0926 22:29:07.386536   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.386551   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>default</name>
	I0926 22:29:07.386561   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0926 22:29:07.386567   10530 main.go:141] libmachine: (addons-330674) DBG |   <forward mode='nat'>
	I0926 22:29:07.386576   10530 main.go:141] libmachine: (addons-330674) DBG |     <nat>
	I0926 22:29:07.386584   10530 main.go:141] libmachine: (addons-330674) DBG |       <port start='1024' end='65535'/>
	I0926 22:29:07.386593   10530 main.go:141] libmachine: (addons-330674) DBG |     </nat>
	I0926 22:29:07.386600   10530 main.go:141] libmachine: (addons-330674) DBG |   </forward>
	I0926 22:29:07.386609   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0926 22:29:07.386624   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0926 22:29:07.386674   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0926 22:29:07.386695   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.386722   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0926 22:29:07.386749   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.386765   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.386773   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.386781   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387226   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.387079   10558 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I0926 22:29:07.387252   10530 main.go:141] libmachine: (addons-330674) DBG | defining private network:
	I0926 22:29:07.387264   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.387271   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.387280   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.387287   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.387305   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.387341   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.387364   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.387386   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.387410   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.387419   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.387423   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.393131   10530 main.go:141] libmachine: (addons-330674) DBG | creating private network mk-addons-330674 192.168.39.0/24...
	I0926 22:29:07.460176   10530 main.go:141] libmachine: (addons-330674) DBG | private network mk-addons-330674 192.168.39.0/24 created
	I0926 22:29:07.460404   10530 main.go:141] libmachine: (addons-330674) DBG | <network>
	I0926 22:29:07.460423   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>mk-addons-330674</name>
	I0926 22:29:07.460433   10530 main.go:141] libmachine: (addons-330674) setting up store path in /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:07.460457   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>e70fd5af-70d4-4d49-913b-79a95d8fca9c</uuid>
	I0926 22:29:07.460472   10530 main.go:141] libmachine: (addons-330674) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0926 22:29:07.460480   10530 main.go:141] libmachine: (addons-330674) DBG |   <mac address='52:54:00:a6:90:55'/>
	I0926 22:29:07.460493   10530 main.go:141] libmachine: (addons-330674) DBG |   <dns enable='no'/>
	I0926 22:29:07.460501   10530 main.go:141] libmachine: (addons-330674) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0926 22:29:07.460527   10530 main.go:141] libmachine: (addons-330674) building disk image from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 22:29:07.460539   10530 main.go:141] libmachine: (addons-330674) DBG |     <dhcp>
	I0926 22:29:07.460549   10530 main.go:141] libmachine: (addons-330674) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0926 22:29:07.460556   10530 main.go:141] libmachine: (addons-330674) DBG |     </dhcp>
	I0926 22:29:07.460567   10530 main.go:141] libmachine: (addons-330674) DBG |   </ip>
	I0926 22:29:07.460574   10530 main.go:141] libmachine: (addons-330674) DBG | </network>
	I0926 22:29:07.460593   10530 main.go:141] libmachine: (addons-330674) Downloading /home/jenkins/minikube-integration/21642-6020/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0926 22:29:07.460625   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:07.460644   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.460403   10558 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:07.709924   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:07.709791   10558 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa...
	I0926 22:29:08.463909   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463682   10558 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk...
	I0926 22:29:08.463957   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 (perms=drwx------)
	I0926 22:29:08.463983   10530 main.go:141] libmachine: (addons-330674) DBG | Writing magic tar header
	I0926 22:29:08.463998   10530 main.go:141] libmachine: (addons-330674) DBG | Writing SSH key tar header
	I0926 22:29:08.464006   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:08.463801   10558 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674 ...
	I0926 22:29:08.464019   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines (perms=drwxr-xr-x)
	I0926 22:29:08.464034   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube (perms=drwxr-xr-x)
	I0926 22:29:08.464052   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674
	I0926 22:29:08.464064   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration/21642-6020 (perms=drwxrwxr-x)
	I0926 22:29:08.464074   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0926 22:29:08.464080   10530 main.go:141] libmachine: (addons-330674) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0926 22:29:08.464099   10530 main.go:141] libmachine: (addons-330674) defining domain...
	I0926 22:29:08.464155   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines
	I0926 22:29:08.464176   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:08.464184   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020
	I0926 22:29:08.464190   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0926 22:29:08.464208   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home/jenkins
	I0926 22:29:08.464242   10530 main.go:141] libmachine: (addons-330674) DBG | checking permissions on dir: /home
	I0926 22:29:08.464263   10530 main.go:141] libmachine: (addons-330674) DBG | skipping /home - not owner
	I0926 22:29:08.465374   10530 main.go:141] libmachine: (addons-330674) defining domain using XML: 
	I0926 22:29:08.465403   10530 main.go:141] libmachine: (addons-330674) <domain type='kvm'>
	I0926 22:29:08.465410   10530 main.go:141] libmachine: (addons-330674)   <name>addons-330674</name>
	I0926 22:29:08.465415   10530 main.go:141] libmachine: (addons-330674)   <memory unit='MiB'>4096</memory>
	I0926 22:29:08.465420   10530 main.go:141] libmachine: (addons-330674)   <vcpu>2</vcpu>
	I0926 22:29:08.465424   10530 main.go:141] libmachine: (addons-330674)   <features>
	I0926 22:29:08.465428   10530 main.go:141] libmachine: (addons-330674)     <acpi/>
	I0926 22:29:08.465432   10530 main.go:141] libmachine: (addons-330674)     <apic/>
	I0926 22:29:08.465438   10530 main.go:141] libmachine: (addons-330674)     <pae/>
	I0926 22:29:08.465444   10530 main.go:141] libmachine: (addons-330674)   </features>
	I0926 22:29:08.465449   10530 main.go:141] libmachine: (addons-330674)   <cpu mode='host-passthrough'>
	I0926 22:29:08.465453   10530 main.go:141] libmachine: (addons-330674)   </cpu>
	I0926 22:29:08.465458   10530 main.go:141] libmachine: (addons-330674)   <os>
	I0926 22:29:08.465462   10530 main.go:141] libmachine: (addons-330674)     <type>hvm</type>
	I0926 22:29:08.465467   10530 main.go:141] libmachine: (addons-330674)     <boot dev='cdrom'/>
	I0926 22:29:08.465471   10530 main.go:141] libmachine: (addons-330674)     <boot dev='hd'/>
	I0926 22:29:08.465481   10530 main.go:141] libmachine: (addons-330674)     <bootmenu enable='no'/>
	I0926 22:29:08.465491   10530 main.go:141] libmachine: (addons-330674)   </os>
	I0926 22:29:08.465499   10530 main.go:141] libmachine: (addons-330674)   <devices>
	I0926 22:29:08.465506   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='cdrom'>
	I0926 22:29:08.465541   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.465556   10530 main.go:141] libmachine: (addons-330674)       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.465565   10530 main.go:141] libmachine: (addons-330674)       <readonly/>
	I0926 22:29:08.465571   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465580   10530 main.go:141] libmachine: (addons-330674)     <disk type='file' device='disk'>
	I0926 22:29:08.465592   10530 main.go:141] libmachine: (addons-330674)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0926 22:29:08.465600   10530 main.go:141] libmachine: (addons-330674)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.465607   10530 main.go:141] libmachine: (addons-330674)       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.465612   10530 main.go:141] libmachine: (addons-330674)     </disk>
	I0926 22:29:08.465616   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465624   10530 main.go:141] libmachine: (addons-330674)       <source network='mk-addons-330674'/>
	I0926 22:29:08.465630   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465639   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465648   10530 main.go:141] libmachine: (addons-330674)     <interface type='network'>
	I0926 22:29:08.465665   10530 main.go:141] libmachine: (addons-330674)       <source network='default'/>
	I0926 22:29:08.465676   10530 main.go:141] libmachine: (addons-330674)       <model type='virtio'/>
	I0926 22:29:08.465681   10530 main.go:141] libmachine: (addons-330674)     </interface>
	I0926 22:29:08.465685   10530 main.go:141] libmachine: (addons-330674)     <serial type='pty'>
	I0926 22:29:08.465690   10530 main.go:141] libmachine: (addons-330674)       <target port='0'/>
	I0926 22:29:08.465696   10530 main.go:141] libmachine: (addons-330674)     </serial>
	I0926 22:29:08.465706   10530 main.go:141] libmachine: (addons-330674)     <console type='pty'>
	I0926 22:29:08.465714   10530 main.go:141] libmachine: (addons-330674)       <target type='serial' port='0'/>
	I0926 22:29:08.465740   10530 main.go:141] libmachine: (addons-330674)     </console>
	I0926 22:29:08.465754   10530 main.go:141] libmachine: (addons-330674)     <rng model='virtio'>
	I0926 22:29:08.465774   10530 main.go:141] libmachine: (addons-330674)       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.465783   10530 main.go:141] libmachine: (addons-330674)     </rng>
	I0926 22:29:08.465790   10530 main.go:141] libmachine: (addons-330674)   </devices>
	I0926 22:29:08.465796   10530 main.go:141] libmachine: (addons-330674) </domain>
	I0926 22:29:08.465805   10530 main.go:141] libmachine: (addons-330674) 
	I0926 22:29:08.473977   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:84:c4:98 in network default
	I0926 22:29:08.474678   10530 main.go:141] libmachine: (addons-330674) starting domain...
	I0926 22:29:08.474698   10530 main.go:141] libmachine: (addons-330674) ensuring networks are active...
	I0926 22:29:08.474707   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:08.475451   10530 main.go:141] libmachine: (addons-330674) Ensuring network default is active
	I0926 22:29:08.475817   10530 main.go:141] libmachine: (addons-330674) Ensuring network mk-addons-330674 is active
	I0926 22:29:08.476435   10530 main.go:141] libmachine: (addons-330674) getting domain XML...
	I0926 22:29:08.477581   10530 main.go:141] libmachine: (addons-330674) DBG | starting domain XML:
	I0926 22:29:08.477607   10530 main.go:141] libmachine: (addons-330674) DBG | <domain type='kvm'>
	I0926 22:29:08.477626   10530 main.go:141] libmachine: (addons-330674) DBG |   <name>addons-330674</name>
	I0926 22:29:08.477633   10530 main.go:141] libmachine: (addons-330674) DBG |   <uuid>0270d5ce-774d-47cc-84b7-b73291b9eb86</uuid>
	I0926 22:29:08.477643   10530 main.go:141] libmachine: (addons-330674) DBG |   <memory unit='KiB'>4194304</memory>
	I0926 22:29:08.477648   10530 main.go:141] libmachine: (addons-330674) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0926 22:29:08.477654   10530 main.go:141] libmachine: (addons-330674) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 22:29:08.477661   10530 main.go:141] libmachine: (addons-330674) DBG |   <os>
	I0926 22:29:08.477680   10530 main.go:141] libmachine: (addons-330674) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 22:29:08.477689   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='cdrom'/>
	I0926 22:29:08.477699   10530 main.go:141] libmachine: (addons-330674) DBG |     <boot dev='hd'/>
	I0926 22:29:08.477710   10530 main.go:141] libmachine: (addons-330674) DBG |     <bootmenu enable='no'/>
	I0926 22:29:08.477719   10530 main.go:141] libmachine: (addons-330674) DBG |   </os>
	I0926 22:29:08.477724   10530 main.go:141] libmachine: (addons-330674) DBG |   <features>
	I0926 22:29:08.477729   10530 main.go:141] libmachine: (addons-330674) DBG |     <acpi/>
	I0926 22:29:08.477735   10530 main.go:141] libmachine: (addons-330674) DBG |     <apic/>
	I0926 22:29:08.477740   10530 main.go:141] libmachine: (addons-330674) DBG |     <pae/>
	I0926 22:29:08.477744   10530 main.go:141] libmachine: (addons-330674) DBG |   </features>
	I0926 22:29:08.477753   10530 main.go:141] libmachine: (addons-330674) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 22:29:08.477770   10530 main.go:141] libmachine: (addons-330674) DBG |   <clock offset='utc'/>
	I0926 22:29:08.477812   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 22:29:08.477847   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_reboot>restart</on_reboot>
	I0926 22:29:08.477862   10530 main.go:141] libmachine: (addons-330674) DBG |   <on_crash>destroy</on_crash>
	I0926 22:29:08.477872   10530 main.go:141] libmachine: (addons-330674) DBG |   <devices>
	I0926 22:29:08.477883   10530 main.go:141] libmachine: (addons-330674) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 22:29:08.477893   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='cdrom'>
	I0926 22:29:08.477901   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw'/>
	I0926 22:29:08.477910   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/boot2docker.iso'/>
	I0926 22:29:08.477939   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 22:29:08.477962   10530 main.go:141] libmachine: (addons-330674) DBG |       <readonly/>
	I0926 22:29:08.477976   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 22:29:08.477987   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.477997   10530 main.go:141] libmachine: (addons-330674) DBG |     <disk type='file' device='disk'>
	I0926 22:29:08.478009   10530 main.go:141] libmachine: (addons-330674) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 22:29:08.478027   10530 main.go:141] libmachine: (addons-330674) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/addons-330674.rawdisk'/>
	I0926 22:29:08.478038   10530 main.go:141] libmachine: (addons-330674) DBG |       <target dev='hda' bus='virtio'/>
	I0926 22:29:08.478054   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 22:29:08.478064   10530 main.go:141] libmachine: (addons-330674) DBG |     </disk>
	I0926 22:29:08.478085   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 22:29:08.478104   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 22:29:08.478118   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478135   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 22:29:08.478148   10530 main.go:141] libmachine: (addons-330674) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 22:29:08.478167   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 22:29:08.478178   10530 main.go:141] libmachine: (addons-330674) DBG |     </controller>
	I0926 22:29:08.478195   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478213   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:fe:3c:4a'/>
	I0926 22:29:08.478223   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='mk-addons-330674'/>
	I0926 22:29:08.478233   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478243   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 22:29:08.478252   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478264   10530 main.go:141] libmachine: (addons-330674) DBG |     <interface type='network'>
	I0926 22:29:08.478275   10530 main.go:141] libmachine: (addons-330674) DBG |       <mac address='52:54:00:84:c4:98'/>
	I0926 22:29:08.478286   10530 main.go:141] libmachine: (addons-330674) DBG |       <source network='default'/>
	I0926 22:29:08.478308   10530 main.go:141] libmachine: (addons-330674) DBG |       <model type='virtio'/>
	I0926 22:29:08.478322   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 22:29:08.478330   10530 main.go:141] libmachine: (addons-330674) DBG |     </interface>
	I0926 22:29:08.478350   10530 main.go:141] libmachine: (addons-330674) DBG |     <serial type='pty'>
	I0926 22:29:08.478362   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='isa-serial' port='0'>
	I0926 22:29:08.478459   10530 main.go:141] libmachine: (addons-330674) DBG |         <model name='isa-serial'/>
	I0926 22:29:08.478491   10530 main.go:141] libmachine: (addons-330674) DBG |       </target>
	I0926 22:29:08.478512   10530 main.go:141] libmachine: (addons-330674) DBG |     </serial>
	I0926 22:29:08.478522   10530 main.go:141] libmachine: (addons-330674) DBG |     <console type='pty'>
	I0926 22:29:08.478537   10530 main.go:141] libmachine: (addons-330674) DBG |       <target type='serial' port='0'/>
	I0926 22:29:08.478548   10530 main.go:141] libmachine: (addons-330674) DBG |     </console>
	I0926 22:29:08.478564   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='mouse' bus='ps2'/>
	I0926 22:29:08.478581   10530 main.go:141] libmachine: (addons-330674) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 22:29:08.478595   10530 main.go:141] libmachine: (addons-330674) DBG |     <audio id='1' type='none'/>
	I0926 22:29:08.478607   10530 main.go:141] libmachine: (addons-330674) DBG |     <memballoon model='virtio'>
	I0926 22:29:08.478622   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 22:29:08.478634   10530 main.go:141] libmachine: (addons-330674) DBG |     </memballoon>
	I0926 22:29:08.478649   10530 main.go:141] libmachine: (addons-330674) DBG |     <rng model='virtio'>
	I0926 22:29:08.478659   10530 main.go:141] libmachine: (addons-330674) DBG |       <backend model='random'>/dev/random</backend>
	I0926 22:29:08.478667   10530 main.go:141] libmachine: (addons-330674) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 22:29:08.478674   10530 main.go:141] libmachine: (addons-330674) DBG |     </rng>
	I0926 22:29:08.478679   10530 main.go:141] libmachine: (addons-330674) DBG |   </devices>
	I0926 22:29:08.478685   10530 main.go:141] libmachine: (addons-330674) DBG | </domain>
	I0926 22:29:08.478692   10530 main.go:141] libmachine: (addons-330674) DBG | 
	I0926 22:29:09.794414   10530 main.go:141] libmachine: (addons-330674) waiting for domain to start...
	I0926 22:29:09.795757   10530 main.go:141] libmachine: (addons-330674) domain is now running
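For illustration, the define-then-start sequence captured above can be sketched against the virsh CLI (the kvm2 driver itself talks to libvirt through its Go bindings; the XML file path and domain name below are placeholders, not taken from the driver's code):

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart is a minimal sketch of the "define domain XML, then start it"
// step seen in the log. It shells out to virsh instead of using libvirt's Go
// bindings, which is what the kvm2 driver actually does.
func defineAndStart(xmlPath, domainName string) error {
	// Register the domain with libvirt from its XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// Boot the freshly defined domain.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical path and name, for illustration only.
	if err := defineAndStart("/tmp/addons-330674.xml", "addons-330674"); err != nil {
		fmt.Println(err)
	}
}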
	I0926 22:29:09.795779   10530 main.go:141] libmachine: (addons-330674) waiting for IP...
	I0926 22:29:09.796619   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:09.797072   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:09.797094   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:09.797358   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:09.797434   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:09.797363   10558 retry.go:31] will retry after 273.626577ms: waiting for domain to come up
	I0926 22:29:10.073299   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.073781   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.073821   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.074074   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.074127   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.074070   10558 retry.go:31] will retry after 328.642045ms: waiting for domain to come up
	I0926 22:29:10.404766   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.405330   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.405358   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.405650   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.405699   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.405633   10558 retry.go:31] will retry after 438.92032ms: waiting for domain to come up
	I0926 22:29:10.846204   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:10.846643   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:10.846672   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:10.846906   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:10.846933   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:10.846871   10558 retry.go:31] will retry after 558.153234ms: waiting for domain to come up
	I0926 22:29:11.406899   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.407422   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.407438   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.407834   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.407882   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.407800   10558 retry.go:31] will retry after 539.111569ms: waiting for domain to come up
	I0926 22:29:11.948608   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:11.949098   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:11.949119   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:11.949455   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:11.949481   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:11.949435   10558 retry.go:31] will retry after 832.890938ms: waiting for domain to come up
	I0926 22:29:12.784343   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:12.784868   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:12.784895   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:12.785122   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:12.785150   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:12.785094   10558 retry.go:31] will retry after 734.304778ms: waiting for domain to come up
	I0926 22:29:13.521093   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:13.521705   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:13.521742   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:13.521961   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:13.521985   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:13.521931   10558 retry.go:31] will retry after 1.380433504s: waiting for domain to come up
	I0926 22:29:14.904439   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:14.904924   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:14.904953   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:14.905190   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:14.905218   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:14.905169   10558 retry.go:31] will retry after 1.496759703s: waiting for domain to come up
	I0926 22:29:16.404048   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:16.404524   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:16.404544   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:16.404780   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:16.404815   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:16.404749   10558 retry.go:31] will retry after 2.080327572s: waiting for domain to come up
	I0926 22:29:18.486681   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:18.487121   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:18.487136   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:18.487537   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:18.487640   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:18.487542   10558 retry.go:31] will retry after 2.860875374s: waiting for domain to come up
	I0926 22:29:21.351807   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:21.352511   10530 main.go:141] libmachine: (addons-330674) DBG | no network interface addresses found for domain addons-330674 (source=lease)
	I0926 22:29:21.352546   10530 main.go:141] libmachine: (addons-330674) DBG | trying to list again with source=arp
	I0926 22:29:21.352882   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find current IP address of domain addons-330674 in network mk-addons-330674 (interfaces detected: [])
	I0926 22:29:21.352912   10530 main.go:141] libmachine: (addons-330674) DBG | I0926 22:29:21.352841   10558 retry.go:31] will retry after 3.24989466s: waiting for domain to come up
	I0926 22:29:24.605898   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.606496   10530 main.go:141] libmachine: (addons-330674) found domain IP: 192.168.39.36
	I0926 22:29:24.606514   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has current primary IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
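The repeated "will retry after ...: waiting for domain to come up" lines above come from a simple poll-with-growing-delay loop. A minimal sketch of that pattern, assuming a stand-in probe function rather than the driver's real DHCP-lease/ARP lookup:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP keeps probing for the domain's IP, sleeping a little longer after
// each miss, until a deadline is reached. probe is a placeholder for the
// driver's lease/ARP lookup.
func waitForIP(probe func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := probe(); ok {
			return ip, nil
		}
		fmt.Printf("no IP yet, retrying in %v\n", delay)
		time.Sleep(delay)
		// Grow the delay so later probes back off, roughly like the log shows.
		delay += delay / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	// Fake probe that "finds" the IP on the third attempt (demo assumption).
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.39.36", attempts >= 3
	}, 30*time.Second)
	fmt.Println(ip, err)
}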
	I0926 22:29:24.606520   10530 main.go:141] libmachine: (addons-330674) reserving static IP address...
	I0926 22:29:24.607058   10530 main.go:141] libmachine: (addons-330674) DBG | unable to find host DHCP lease matching {name: "addons-330674", mac: "52:54:00:fe:3c:4a", ip: "192.168.39.36"} in network mk-addons-330674
	I0926 22:29:24.801972   10530 main.go:141] libmachine: (addons-330674) DBG | Getting to WaitForSSH function...
	I0926 22:29:24.802012   10530 main.go:141] libmachine: (addons-330674) reserved static IP address 192.168.39.36 for domain addons-330674
	I0926 22:29:24.802021   10530 main.go:141] libmachine: (addons-330674) waiting for SSH...
	I0926 22:29:24.805483   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.805987   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.806013   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.806269   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH client type: external
	I0926 22:29:24.806295   10530 main.go:141] libmachine: (addons-330674) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa (-rw-------)
	I0926 22:29:24.806338   10530 main.go:141] libmachine: (addons-330674) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 22:29:24.806355   10530 main.go:141] libmachine: (addons-330674) DBG | About to run SSH command:
	I0926 22:29:24.806382   10530 main.go:141] libmachine: (addons-330674) DBG | exit 0
	I0926 22:29:24.945871   10530 main.go:141] libmachine: (addons-330674) DBG | SSH cmd err, output: <nil>: 
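The SSH wait above simply runs "exit 0" through the external ssh client until it returns success. A rough equivalent, reusing the host, user and key path from the log but with a trimmed option list:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs the trivial command "exit 0" over ssh with non-interactive
// options; a zero exit status means the guest's sshd is accepting connections.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	ok := sshReady("docker", "192.168.39.36",
		"/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa")
	fmt.Println("ssh reachable:", ok)
}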
	I0926 22:29:24.946187   10530 main.go:141] libmachine: (addons-330674) domain creation complete
	I0926 22:29:24.946531   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:24.947223   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947466   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:24.947633   10530 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 22:29:24.947649   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:24.949328   10530 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 22:29:24.949342   10530 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 22:29:24.949347   10530 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 22:29:24.949352   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:24.952234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952698   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:24.952711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:24.952971   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:24.953145   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953333   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:24.953464   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:24.953611   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:24.953903   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:24.953918   10530 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 22:29:25.060937   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:29:25.060966   10530 main.go:141] libmachine: Detecting the provisioner...
	I0926 22:29:25.060976   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.064297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064652   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.064684   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.064929   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.065163   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065357   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.065558   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.065802   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.066092   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.066109   10530 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 22:29:25.175605   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 22:29:25.175676   10530 main.go:141] libmachine: found compatible host: buildroot
	I0926 22:29:25.175689   10530 main.go:141] libmachine: Provisioning with buildroot...
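Provisioner detection boils down to parsing the key=value pairs printed by "cat /etc/os-release" and checking the ID field; a minimal sketch using the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner parses os-release style key=value lines and returns the
// ID field, e.g. "buildroot" for the guest in this run.
func detectProvisioner(osRelease string) string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields["ID"]
}

func main() {
	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	fmt.Println("provisioner:", detectProvisioner(out))
}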
	I0926 22:29:25.175700   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.175985   10530 buildroot.go:166] provisioning hostname "addons-330674"
	I0926 22:29:25.176011   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.176150   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.179382   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.179854   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.179885   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.180043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.180247   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180432   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.180575   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.180767   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.181010   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.181024   10530 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-330674 && echo "addons-330674" | sudo tee /etc/hostname
	I0926 22:29:25.307949   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-330674
	
	I0926 22:29:25.307974   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.311584   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312035   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.312067   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.312266   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.312427   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312555   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.312671   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.312801   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.313027   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.313044   10530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-330674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-330674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-330674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:25.450755   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
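Both hostname commands above are rendered from the machine name alone: one sets the kernel hostname and /etc/hostname, the other makes sure /etc/hosts resolves that name via 127.0.1.1. A small sketch of how they can be assembled (the shell fragments mirror what the log shows being run over SSH):

package main

import "fmt"

// hostnameCommands builds the two provisioning commands for a machine name.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return setHostname, fixHosts
}

func main() {
	a, b := hostnameCommands("addons-330674")
	fmt.Println(a)
	fmt.Println(b)
}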
	I0926 22:29:25.450809   10530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 22:29:25.450872   10530 buildroot.go:174] setting up certificates
	I0926 22:29:25.450885   10530 provision.go:84] configureAuth start
	I0926 22:29:25.450905   10530 main.go:141] libmachine: (addons-330674) Calling .GetMachineName
	I0926 22:29:25.451192   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:25.454688   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455254   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.455279   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.455519   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.458753   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459271   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.459303   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.459556   10530 provision.go:143] copyHostCerts
	I0926 22:29:25.459631   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 22:29:25.459785   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 22:29:25.459921   10530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 22:29:25.459995   10530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.addons-330674 san=[127.0.0.1 192.168.39.36 addons-330674 localhost minikube]
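The server certificate is generated with the machine's IPs and host names as subject alternative names. A self-contained sketch of that idea follows; unlike the real flow above, which signs with the CA key pair, this version is self-signed, so treat it purely as an illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a certificate whose SANs cover the given host names
// and IPs, roughly matching the san=[...] list in the log. Self-signed here
// for brevity; the real provisioner signs with the minikube CA.
func newServerCert(org string, dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // e.g. addons-330674, localhost, minikube
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.36
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServerCert("jenkins.addons-330674",
		[]string{"addons-330674", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.36")})
	fmt.Println(len(pemBytes), err)
}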
	I0926 22:29:25.636851   10530 provision.go:177] copyRemoteCerts
	I0926 22:29:25.636910   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:25.636931   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.640198   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640611   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.640647   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.640899   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.641105   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.641276   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.641432   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:25.727740   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:29:25.759430   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:25.790642   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 22:29:25.824890   10530 provision.go:87] duration metric: took 373.989122ms to configureAuth
	I0926 22:29:25.824935   10530 buildroot.go:189] setting minikube options for container-runtime
	I0926 22:29:25.825088   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:25.825156   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:25.828108   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828481   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:25.828519   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:25.828682   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:25.828889   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829082   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:25.829206   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:25.829377   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:25.829561   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:25.829574   10530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 22:29:26.083637   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 22:29:26.083688   10530 main.go:141] libmachine: Checking connection to Docker...
	I0926 22:29:26.083699   10530 main.go:141] libmachine: (addons-330674) Calling .GetURL
	I0926 22:29:26.084980   10530 main.go:141] libmachine: (addons-330674) DBG | using libvirt version 8000000
	I0926 22:29:26.087617   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088034   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.088058   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.088261   10530 main.go:141] libmachine: Docker is up and running!
	I0926 22:29:26.088277   10530 main.go:141] libmachine: Reticulating splines...
	I0926 22:29:26.088285   10530 client.go:171] duration metric: took 18.862290788s to LocalClient.Create
	I0926 22:29:26.088309   10530 start.go:167] duration metric: took 18.862351466s to libmachine.API.Create "addons-330674"
	I0926 22:29:26.088318   10530 start.go:293] postStartSetup for "addons-330674" (driver="kvm2")
	I0926 22:29:26.088328   10530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:26.088344   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.088646   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:26.088676   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.091157   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091558   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.091604   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.091759   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.091987   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.092140   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.092320   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.179094   10530 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:26.184339   10530 info.go:137] Remote host: Buildroot 2025.02
	I0926 22:29:26.184372   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 22:29:26.184463   10530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 22:29:26.184504   10530 start.go:296] duration metric: took 96.180038ms for postStartSetup
	I0926 22:29:26.184545   10530 main.go:141] libmachine: (addons-330674) Calling .GetConfigRaw
	I0926 22:29:26.185197   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.187971   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188443   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.188476   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.188748   10530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/config.json ...
	I0926 22:29:26.188966   10530 start.go:128] duration metric: took 18.979703505s to createHost
	I0926 22:29:26.188989   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.191408   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.191793   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.191847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.192051   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.192216   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192328   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.192574   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.192739   10530 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:26.192982   10530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0926 22:29:26.192997   10530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 22:29:26.302967   10530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758925766.258154674
	
	I0926 22:29:26.302991   10530 fix.go:216] guest clock: 1758925766.258154674
	I0926 22:29:26.302998   10530 fix.go:229] Guest: 2025-09-26 22:29:26.258154674 +0000 UTC Remote: 2025-09-26 22:29:26.188978954 +0000 UTC m=+19.093162175 (delta=69.17572ms)
	I0926 22:29:26.303017   10530 fix.go:200] guest clock delta is within tolerance: 69.17572ms
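The guest clock check parses the "date +%s.%N" output as fractional epoch seconds and compares it with the host's reading taken around the same moment. A small sketch using the exact values from the log (the 2s tolerance is an assumption):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta converts the guest's `date +%s.%N` output into a time.Time and
// returns how far it is ahead of the host's reference reading.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Values taken from the log lines above; the tolerance is an assumption.
	hostNow := time.Date(2025, 9, 26, 22, 29, 26, 188978954, time.UTC)
	delta, err := clockDelta("1758925766.258154674", hostNow)
	fmt.Println(delta, err, delta < 2*time.Second)
}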
	I0926 22:29:26.303021   10530 start.go:83] releasing machines lock for "addons-330674", held for 19.093844163s
	I0926 22:29:26.303039   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.303314   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:26.306248   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306677   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.306699   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.306871   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307420   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307668   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:26.307796   10530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:26.307854   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.307908   10530 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:26.307928   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:26.311189   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311234   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311728   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311762   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.311798   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:26.311816   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:26.312009   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312028   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:26.312218   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312225   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:26.312441   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312444   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:26.312617   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.312624   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:26.424051   10530 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:26.430969   10530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 22:29:26.610848   10530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 22:29:26.618574   10530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 22:29:26.618644   10530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:26.640335   10530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 22:29:26.640361   10530 start.go:495] detecting cgroup driver to use...
	I0926 22:29:26.640424   10530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:26.662226   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:26.680146   10530 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:29:26.680210   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:29:26.699354   10530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:29:26.717303   10530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:29:26.869422   10530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:29:27.077850   10530 docker.go:234] disabling docker service ...
	I0926 22:29:27.077946   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:29:27.096325   10530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:29:27.112839   10530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:29:27.280087   10530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:29:27.428409   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:27.454379   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:27.481918   10530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 22:29:27.481978   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.496018   10530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 22:29:27.496545   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.511695   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.526954   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.542152   10530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:27.556957   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.570979   10530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:27.593384   10530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
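The CRI-O settings are applied by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed over SSH rather than templating a fresh file. A sketch that renders the two headline edits from the log (pause image and cgroup manager):

package main

import "fmt"

// crioConfigCommands builds the sed invocations that swap in the desired
// pause image and cgroup manager, matching the commands run above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10.1", "cgroupfs") {
		fmt.Println(c)
	}
}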
	I0926 22:29:27.606999   10530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:27.619008   10530 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 22:29:27.619079   10530 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 22:29:27.643401   10530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
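The netfilter step above is a fallback: if the bridge sysctl cannot be read because the module is not loaded yet, load br_netfilter explicitly, then enable IPv4 forwarding. A sketch of that logic, assuming it runs on the guest with sudo available, like the ssh_runner calls:

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the "couldn't verify netfilter ... which might be
// okay" path: probe the sysctl, fall back to modprobe, then turn on
// ip_forward.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureNetfilter())
}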
	I0926 22:29:27.659682   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:27.806017   10530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 22:29:27.921593   10530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 22:29:27.921704   10530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 22:29:27.927956   10530 start.go:563] Will wait 60s for crictl version
	I0926 22:29:27.928056   10530 ssh_runner.go:195] Run: which crictl
	I0926 22:29:27.932464   10530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:27.976200   10530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 22:29:27.976335   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.008853   10530 ssh_runner.go:195] Run: crio --version
	I0926 22:29:28.043862   10530 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 22:29:28.045740   10530 main.go:141] libmachine: (addons-330674) Calling .GetIP
	I0926 22:29:28.048806   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049367   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:28.049401   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:28.049696   10530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:28.054603   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:28.071477   10530 kubeadm.go:883] updating cluster {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:28.071590   10530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:28.071633   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:28.118674   10530 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 22:29:28.118764   10530 ssh_runner.go:195] Run: which lz4
	I0926 22:29:28.123934   10530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 22:29:28.129383   10530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 22:29:28.129421   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0926 22:29:29.768442   10530 crio.go:462] duration metric: took 1.644542886s to copy over tarball
	I0926 22:29:29.768520   10530 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 22:29:31.498224   10530 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.729674115s)
	I0926 22:29:31.498261   10530 crio.go:469] duration metric: took 1.729788969s to extract the tarball
	I0926 22:29:31.498271   10530 ssh_runner.go:146] rm: /preloaded.tar.lz4
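The preload step checks whether the tarball already exists on the guest, copies it over if not, and unpacks it into /var with lz4 before deleting it. A compact sketch of the extract-and-clean-up part (the scp transfer is elided):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var the same way
// the log shows, then removes it. It assumes the tarball is already in place.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err) // scp step elided
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}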
	I0926 22:29:31.542261   10530 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:31.589755   10530 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:29:31.589778   10530 cache_images.go:85] Images are preloaded, skipping loading
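
The preload step above copies a ~400 MB cri-o image tarball to the node and unpacks it with tar's lz4 filter before re-listing images with crictl. A minimal local sketch of that unpack step, assuming tar and lz4 are on PATH; the invocation mirrors the command in the log, and the paths are illustrative rather than anything minikube exposes:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirror the extraction command seen in the log:
        //   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4",
            "-C", "/var",
            "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Println("preload tarball extracted")
    }
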
	I0926 22:29:31.589786   10530 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.34.0 crio true true} ...
	I0926 22:29:31.589917   10530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-330674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
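
The kubelet drop-in shown above is rendered from the node's IP, hostname override, and the versioned binary path. A hedged sketch of producing such a fragment with text/template; the kubeletUnit struct and its fields are hypothetical stand-ins, not minikube's generator:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletUnit is a hypothetical holder for the values visible in the log.
    type kubeletUnit struct {
        BinaryPath string
        Hostname   string
        NodeIP     string
    }

    const dropIn = "[Unit]\n" +
        "Wants=crio.service\n\n" +
        "[Service]\n" +
        "ExecStart=\n" +
        "ExecStart={{.BinaryPath}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}\n\n" +
        "[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        _ = t.Execute(os.Stdout, kubeletUnit{
            BinaryPath: "/var/lib/minikube/binaries/v1.34.0",
            Hostname:   "addons-330674",
            NodeIP:     "192.168.39.36",
        })
    }
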
	I0926 22:29:31.590004   10530 ssh_runner.go:195] Run: crio config
	I0926 22:29:31.637842   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:31.637869   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:31.637886   10530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:31.637913   10530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-330674 NodeName:addons-330674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:31.638060   10530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-330674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
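
The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that later lands in /var/tmp/minikube/kubeadm.yaml. A small sketch that iterates the documents with gopkg.in/yaml.v3 and prints each apiVersion/kind; the local filename is illustrative and this is a generic reader, not kubeadm's loader:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                log.Fatal(err)
            }
            fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
        }
    }
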
	
	I0926 22:29:31.638136   10530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:31.651088   10530 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:31.651173   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:31.664460   10530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 22:29:31.688820   10530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:31.711364   10530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0926 22:29:31.734280   10530 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:31.738852   10530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
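
The two commands above first check whether /etc/hosts already maps control-plane.minikube.internal to the node IP, then rewrite the file, dropping any stale entry and appending the current one. An equivalent sketch in Go; it writes the result to a scratch path (/tmp/hosts.new is illustrative) rather than replacing /etc/hosts directly:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "192.168.39.36\t" + host

        in, err := os.Open("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        defer in.Close()

        var out strings.Builder
        sc := bufio.NewScanner(in)
        for sc.Scan() {
            line := sc.Text()
            // Drop any existing tab-separated entry for the control-plane alias.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            out.WriteString(line + "\n")
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        out.WriteString(entry + "\n")

        // The log copies the rewritten file back into place with sudo cp.
        if err := os.WriteFile("/tmp/hosts.new", []byte(out.String()), 0644); err != nil {
            log.Fatal(err)
        }
        fmt.Println("wrote /tmp/hosts.new")
    }
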
	I0926 22:29:31.755229   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:31.902308   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:31.937034   10530 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674 for IP: 192.168.39.36
	I0926 22:29:31.937058   10530 certs.go:195] generating shared ca certs ...
	I0926 22:29:31.937074   10530 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:31.937207   10530 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 22:29:32.026590   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt ...
	I0926 22:29:32.026617   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt: {Name:mk1e3bf23e32e449f89f22a09284a0006a99cefd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026782   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key ...
	I0926 22:29:32.026793   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key: {Name:mk5eaff0d17e330d6fd7ef6fcf7ad742525bef9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.026899   10530 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 22:29:32.787420   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt ...
	I0926 22:29:32.787450   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt: {Name:mk6c2cf5ab5d6decc42b76574fbbb2fa2a0d74f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787609   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key ...
	I0926 22:29:32.787622   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key: {Name:mkbbce150377f831f3bce3eb30a4bb3f0e3a8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.787695   10530 certs.go:257] generating profile certs ...
	I0926 22:29:32.787750   10530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key
	I0926 22:29:32.787764   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt with IP's: []
	I0926 22:29:32.908998   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt ...
	I0926 22:29:32.909041   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: {Name:mk6078e9e1b406565a2c72ced7e3ab3a671f1de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909244   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key ...
	I0926 22:29:32.909261   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.key: {Name:mkf3b0b0d969697c37ccf2b79cfe2d489e612622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:32.909377   10530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab
	I0926 22:29:32.909405   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.36]
	I0926 22:29:33.576258   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab ...
	I0926 22:29:33.576288   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab: {Name:mk70a5fec9ce790e76bea656ec7f721eddde8def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576479   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab ...
	I0926 22:29:33.576497   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab: {Name:mkfc811bca2f58c6255301ef1bf7f7fc92f29309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.576622   10530 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt
	I0926 22:29:33.576725   10530 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key.bda1d0ab -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key
	I0926 22:29:33.576779   10530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key
	I0926 22:29:33.576798   10530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt with IP's: []
	I0926 22:29:33.714042   10530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt ...
	I0926 22:29:33.714078   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt: {Name:mk2e196363dd00f5cf367b53bb1262ff8b58660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714261   10530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key ...
	I0926 22:29:33.714278   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key: {Name:mk6fa7164da45c401e6803ce35af819baa1796ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:33.714526   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 22:29:33.714563   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:29:33.714590   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:33.714617   10530 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
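
The certs phase above generates a shared minikubeCA plus profile certificates, with the apiserver cert signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.36. A compact, generic sketch of issuing a server certificate with those IP SANs from a freshly created CA using only crypto/x509; it illustrates the technique and is not minikube's certs package:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, analogous to minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the IP SANs seen in the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.36"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
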
	I0926 22:29:33.715164   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:33.757024   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 22:29:33.801115   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:33.836953   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:29:33.869906   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:33.902538   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:29:33.933981   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:33.969510   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 22:29:34.000543   10530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:34.033373   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:34.056131   10530 ssh_runner.go:195] Run: openssl version
	I0926 22:29:34.062810   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:34.076566   10530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082039   10530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.082103   10530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:34.090282   10530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
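
The openssl/ln pair above installs minikubeCA.pem into the system trust store: /etc/ssl/certs/b5213941.0 is named after the certificate's OpenSSL subject hash, which is how c_rehash-style lookups find it. A short sketch that recomputes the hash and creates the link; it shells out to openssl and would need root to write into /etc/ssl/certs:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl prints the subject hash that the <hash>.0 symlink is named after.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

        // Replace any stale link, then point <hash>.0 at the CA certificate.
        _ = os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
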
	I0926 22:29:34.104577   10530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:34.110236   10530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:34.110292   10530 kubeadm.go:400] StartCluster: {Name:addons-330674 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-330674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:34.110386   10530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 22:29:34.110460   10530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:29:34.153972   10530 cri.go:89] found id: ""
	I0926 22:29:34.154038   10530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:34.166665   10530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:34.179555   10530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:34.192252   10530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:34.192272   10530 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:34.192315   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:34.204361   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:34.204419   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:34.216783   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:34.228359   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:34.228420   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:34.241418   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.253479   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:34.253551   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:34.266101   10530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:34.278300   10530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:34.278381   10530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:34.291142   10530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 22:29:34.464024   10530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:29:47.445637   10530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:47.445747   10530 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:47.445868   10530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:47.445976   10530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:47.446109   10530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:47.446209   10530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:47.447948   10530 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:47.448061   10530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:47.448147   10530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:47.448269   10530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:47.448325   10530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:47.448386   10530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:47.448429   10530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:29:47.448504   10530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:29:47.448610   10530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448701   10530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:29:47.448884   10530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-330674 localhost] and IPs [192.168.39.36 127.0.0.1 ::1]
	I0926 22:29:47.448982   10530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:29:47.449075   10530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:29:47.449133   10530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:29:47.449183   10530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:29:47.449259   10530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:29:47.449346   10530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:29:47.449422   10530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:29:47.449517   10530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:29:47.449600   10530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:29:47.449705   10530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:29:47.449800   10530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:29:47.451527   10530 out.go:252]   - Booting up control plane ...
	I0926 22:29:47.451640   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:29:47.451715   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:29:47.451812   10530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:29:47.451951   10530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:29:47.452083   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:29:47.452213   10530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:29:47.452327   10530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:29:47.452402   10530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:29:47.452577   10530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:29:47.452679   10530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:29:47.452730   10530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001754359s
	I0926 22:29:47.452819   10530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:29:47.452954   10530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.36:8443/livez
	I0926 22:29:47.453080   10530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:29:47.453186   10530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:29:47.453298   10530 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.979294458s
	I0926 22:29:47.453372   10530 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.933266488s
	I0926 22:29:47.453434   10530 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002163771s
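
The control-plane-check lines above poll the kubelet healthz endpoint on 127.0.0.1:10248 and the apiserver, controller-manager, and scheduler health endpoints until each answers. A minimal poller in the same spirit; waitHealthy is an illustrative helper, and TLS verification is skipped here only because the sketch wires in no CA bundle (kubeadm itself verifies properly):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Example only: no CA bundle is configured in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        endpoints := []string{
            "http://127.0.0.1:10248/healthz",   // kubelet
            "https://192.168.39.36:8443/livez", // kube-apiserver
            "https://127.0.0.1:10257/healthz",  // kube-controller-manager
            "https://127.0.0.1:10259/livez",    // kube-scheduler
        }
        for _, e := range endpoints {
            if err := waitHealthy(e, 4*time.Minute); err != nil {
                fmt.Println(err)
                return
            }
            fmt.Println(e, "is healthy")
        }
    }
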
	I0926 22:29:47.453584   10530 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:29:47.453730   10530 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:29:47.453820   10530 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:29:47.454057   10530 kubeadm.go:318] [mark-control-plane] Marking the node addons-330674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:29:47.454109   10530 kubeadm.go:318] [bootstrap-token] Using token: fhdqe8.jaemq9w7cxwr09ny
	I0926 22:29:47.456600   10530 out.go:252]   - Configuring RBAC rules ...
	I0926 22:29:47.456703   10530 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:29:47.456774   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:29:47.456924   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:29:47.457204   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:29:47.457400   10530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:29:47.457529   10530 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:29:47.457694   10530 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:29:47.457760   10530 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:29:47.457852   10530 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:29:47.457878   10530 kubeadm.go:318] 
	I0926 22:29:47.457966   10530 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:29:47.457989   10530 kubeadm.go:318] 
	I0926 22:29:47.458096   10530 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:29:47.458110   10530 kubeadm.go:318] 
	I0926 22:29:47.458158   10530 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:29:47.458244   10530 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:29:47.458315   10530 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:29:47.458324   10530 kubeadm.go:318] 
	I0926 22:29:47.458397   10530 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:29:47.458406   10530 kubeadm.go:318] 
	I0926 22:29:47.458474   10530 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:29:47.458483   10530 kubeadm.go:318] 
	I0926 22:29:47.458552   10530 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:29:47.458681   10530 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:29:47.458813   10530 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:29:47.458841   10530 kubeadm.go:318] 
	I0926 22:29:47.458968   10530 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:29:47.459081   10530 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:29:47.459092   10530 kubeadm.go:318] 
	I0926 22:29:47.459200   10530 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459342   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 22:29:47.459397   10530 kubeadm.go:318] 	--control-plane 
	I0926 22:29:47.459414   10530 kubeadm.go:318] 
	I0926 22:29:47.459557   10530 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:29:47.459575   10530 kubeadm.go:318] 
	I0926 22:29:47.459704   10530 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fhdqe8.jaemq9w7cxwr09ny \
	I0926 22:29:47.459860   10530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
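
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA during bootstrap. A sketch that recomputes it from a CA certificate PEM; the ca.crt path is an assumed local copy:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.crt") // e.g. a copy of /var/lib/minikube/certs/ca.crt
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
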
	I0926 22:29:47.459875   10530 cni.go:84] Creating CNI manager for ""
	I0926 22:29:47.459885   10530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:29:47.462286   10530 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 22:29:47.463479   10530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 22:29:47.480090   10530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
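
The bridge CNI step above writes a 496-byte conflist into /etc/cni/net.d for the 10.244.0.0/16 pod CIDR. As a rough illustration of what a bridge + portmap configuration looks like, the sketch below emits a minimal conflist as JSON; the field values are plausible defaults for the upstream bridge plugin, not a copy of minikube's 1-k8s.conflist:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.4.0",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {
                    "type":         "portmap",
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        out, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
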
	I0926 22:29:47.505223   10530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:47.505369   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-330674 minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-330674 minikube.k8s.io/primary=true
	I0926 22:29:47.547348   10530 ops.go:34] apiserver oom_adj: -16
	I0926 22:29:47.696459   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.197390   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:48.697112   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.197409   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:49.697305   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.196725   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:50.697377   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.197169   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:51.696547   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.197238   10530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:29:52.337983   10530 kubeadm.go:1113] duration metric: took 4.832674675s to wait for elevateKubeSystemPrivileges
	I0926 22:29:52.338028   10530 kubeadm.go:402] duration metric: took 18.227740002s to StartCluster
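
The repeated "kubectl get sa default" runs above are a ~500 ms poll that waits for the default service account to exist before the cluster-admin binding work (elevateKubeSystemPrivileges) is considered done. A generic retry loop in the same shape; waitForDefaultSA is an illustrative helper and the kubeconfig path simply mirrors the one in the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or times out.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("default service account is ready")
    }
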
	I0926 22:29:52.338055   10530 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.338211   10530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:52.338922   10530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:52.339193   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:29:52.339222   10530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:52.339287   10530 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:29:52.339397   10530 addons.go:69] Setting yakd=true in profile "addons-330674"
	I0926 22:29:52.339422   10530 addons.go:238] Setting addon yakd=true in "addons-330674"
	I0926 22:29:52.339438   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.339450   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339442   10530 addons.go:69] Setting inspektor-gadget=true in profile "addons-330674"
	I0926 22:29:52.339484   10530 addons.go:238] Setting addon inspektor-gadget=true in "addons-330674"
	I0926 22:29:52.339489   10530 addons.go:69] Setting registry-creds=true in profile "addons-330674"
	I0926 22:29:52.339500   10530 addons.go:238] Setting addon registry-creds=true in "addons-330674"
	I0926 22:29:52.339517   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339530   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339560   10530 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-330674"
	I0926 22:29:52.339588   10530 addons.go:69] Setting default-storageclass=true in profile "addons-330674"
	I0926 22:29:52.339641   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-330674"
	I0926 22:29:52.339699   10530 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.339712   10530 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-330674"
	I0926 22:29:52.339718   10530 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:29:52.339748   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339759   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339933   10530 addons.go:69] Setting registry=true in profile "addons-330674"
	I0926 22:29:52.339940   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.339946   10530 addons.go:238] Setting addon registry=true in "addons-330674"
	I0926 22:29:52.339964   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.339980   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340110   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340158   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340197   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340205   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340206   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340231   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340240   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340291   10530 addons.go:69] Setting metrics-server=true in profile "addons-330674"
	I0926 22:29:52.340304   10530 addons.go:238] Setting addon metrics-server=true in "addons-330674"
	I0926 22:29:52.340326   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340349   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340374   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340392   10530 addons.go:69] Setting cloud-spanner=true in profile "addons-330674"
	I0926 22:29:52.340443   10530 addons.go:238] Setting addon cloud-spanner=true in "addons-330674"
	I0926 22:29:52.340560   10530 addons.go:69] Setting volcano=true in profile "addons-330674"
	I0926 22:29:52.340574   10530 addons.go:238] Setting addon volcano=true in "addons-330674"
	I0926 22:29:52.340604   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.340716   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340742   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340788   10530 addons.go:69] Setting volumesnapshots=true in profile "addons-330674"
	I0926 22:29:52.340800   10530 addons.go:238] Setting addon volumesnapshots=true in "addons-330674"
	I0926 22:29:52.340924   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.340944   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.340986   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.341014   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.341238   10530 addons.go:69] Setting ingress=true in profile "addons-330674"
	I0926 22:29:52.341253   10530 addons.go:69] Setting storage-provisioner=true in profile "addons-330674"
	I0926 22:29:52.341266   10530 addons.go:238] Setting addon storage-provisioner=true in "addons-330674"
	I0926 22:29:52.341300   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341348   10530 addons.go:238] Setting addon ingress=true in "addons-330674"
	I0926 22:29:52.341240   10530 addons.go:69] Setting ingress-dns=true in profile "addons-330674"
	I0926 22:29:52.341383   10530 addons.go:238] Setting addon ingress-dns=true in "addons-330674"
	I0926 22:29:52.341398   10530 addons.go:69] Setting gcp-auth=true in profile "addons-330674"
	I0926 22:29:52.341402   10530 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-330674"
	I0926 22:29:52.341416   10530 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-330674"
	I0926 22:29:52.341435   10530 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-330674"
	I0926 22:29:52.341449   10530 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-330674"
	I0926 22:29:52.341509   10530 mustload.go:65] Loading cluster: addons-330674
	I0926 22:29:52.341572   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.341666   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342053   10530 config.go:182] Loaded profile config "addons-330674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:52.342088   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342112   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342422   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342457   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.342651   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342763   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.342850   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.342877   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343172   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.343225   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.343690   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.343759   10530 out.go:179] * Verifying Kubernetes components...
	I0926 22:29:52.345141   10530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:52.350321   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350372   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.350322   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.350435   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.351572   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.351633   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.358613   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.358684   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.361429   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0926 22:29:52.362310   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.363266   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.363291   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.363782   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.364414   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.364455   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.371191   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I0926 22:29:52.371861   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.372561   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.372652   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.375030   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.375692   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.375748   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.375980   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0926 22:29:52.377892   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0926 22:29:52.378610   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.379228   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.379277   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.380418   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.380730   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0926 22:29:52.381210   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.381428   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.382129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.382712   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.382732   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.383155   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.383734   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.383880   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.385421   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.386039   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.386056   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.386744   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.392554   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.392631   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.392957   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0926 22:29:52.403136   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0926 22:29:52.403357   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.404253   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.404397   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.404815   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.405017   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.406177   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I0926 22:29:52.407153   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.407267   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0926 22:29:52.407493   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.408091   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408111   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.408550   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.408710   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.408724   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.409166   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.409846   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.409891   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410616   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.410655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.410905   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0926 22:29:52.411003   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0926 22:29:52.411804   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.411878   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413166   10530 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-330674"
	I0926 22:29:52.413212   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.413277   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413290   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413308   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.413357   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.413386   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I0926 22:29:52.413618   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.413655   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.413774   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413937   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.413999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.413926   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.414268   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0926 22:29:52.414417   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415084   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.415098   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.415167   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.415401   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0926 22:29:52.416240   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.416691   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.416706   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.416995   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.417284   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.417356   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.417400   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.417684   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417702   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417745   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.417759   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.417859   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418249   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.418537   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418587   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418801   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.418846   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.418861   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419363   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.419645   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.423740   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424043   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0926 22:29:52.424090   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.424576   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0926 22:29:52.426334   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426454   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.426607   10530 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:29:52.427471   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.427488   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.427592   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0926 22:29:52.427968   10530 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:29:52.427972   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:29:52.428007   10530 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:29:52.428043   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.428128   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.428207   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.428668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.428709   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.429176   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:29:52.429193   10530 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:29:52.429212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.429849   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430116   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430384   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.430434   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.430456   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.430515   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.430528   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.430743   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.430835   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0926 22:29:52.431092   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.432143   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.432185   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.432703   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0926 22:29:52.433465   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.433715   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.434668   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434685   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.434904   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.434924   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.435446   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.435463   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.435495   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0926 22:29:52.435973   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.435917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437085   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.437270   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.437297   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.437337   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.437502   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.437868   10530 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:29:52.438175   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.438188   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.438549   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0926 22:29:52.438682   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.440037   10530 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:52.440061   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:29:52.440079   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.442435   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.442474   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.442439   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.442677   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.443200   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.443252   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.444657   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.444806   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.446914   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.447017   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.447041   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.447190   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.447362   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.447543   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.447999   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.451596   10530 addons.go:238] Setting addon default-storageclass=true in "addons-330674"
	I0926 22:29:52.451644   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:29:52.452021   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.452143   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.452216   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0926 22:29:52.452540   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.452557   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.454847   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454885   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.454917   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.454957   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.455075   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.455146   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.458000   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.458096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.458285   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.458304   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.458720   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.458993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.459085   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.459239   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.463711   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.464398   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.465791   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0926 22:29:52.466371   10530 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:29:52.466644   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:29:52.467050   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.467569   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0926 22:29:52.467743   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0926 22:29:52.468044   10530 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:29:52.468068   10530 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:29:52.468090   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.468774   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.468790   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.469218   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.470226   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.470297   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:29:52.470392   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0926 22:29:52.470551   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.471071   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.471372   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.472449   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.472652   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.472891   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.473131   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:29:52.473349   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.473363   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.473882   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.473998   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0926 22:29:52.474193   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.474752   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.475909   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:29:52.477113   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.477095   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.477161   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.477185   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.478618   10530 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:29:52.480579   10530 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:52.480597   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:29:52.480661   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.480818   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.480951   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0926 22:29:52.481512   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0926 22:29:52.481756   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.481991   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.482171   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.482530   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33609
	I0926 22:29:52.482811   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483144   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0926 22:29:52.483423   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.483436   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.483520   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.483963   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484104   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484127   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.484599   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.484633   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.484662   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486038   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486072   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0926 22:29:52.486043   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.486184   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.486663   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.486711   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.486942   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487245   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487310   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.487460   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.487535   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487597   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.487535   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487638   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.487646   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487661   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.487848   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.487893   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.488103   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488104   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.488177   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488361   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.488389   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:29:52.488389   10530 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:29:52.489656   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.490162   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.490179   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.490193   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0926 22:29:52.490687   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:52.490706   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:29:52.491492   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.491504   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.491505   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:29:52.491619   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.491782   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.492610   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.493077   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.493208   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.493847   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.494199   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.494517   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.494954   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:29:52.495130   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:29:52.495634   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496189   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.496270   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.496863   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.496884   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.497219   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.497702   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:29:52.497708   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.497731   10530 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:29:52.497749   10530 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:29:52.498358   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.498430   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.498469   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.497975   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.498604   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.499305   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.498668   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0926 22:29:52.499351   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.499605   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.499952   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:29:52.499987   10530 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:29:52.500004   10530 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:52.500007   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500012   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:29:52.500023   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.500044   10530 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:29:52.500073   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.499594   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.500501   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.500514   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:52.500523   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:52.500129   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.500763   10530 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:29:52.501281   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:52.501333   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:52.501341   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:52.501398   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.501414   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.501434   10530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:29:52.501736   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.501463   10530 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	W0926 22:29:52.501543   10530 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0926 22:29:52.502042   10530 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:52.502057   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:29:52.502060   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:29:52.502073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502156   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:29:52.502598   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:29:52.502619   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.502406   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.502678   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.502927   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:52.502977   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:29:52.503014   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.503203   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.503387   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.503874   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.504066   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.504321   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.504477   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:29:52.504534   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:29:52.505083   10530 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:29:52.505131   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:29:52.505159   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.505214   10530 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:52.505228   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:29:52.505243   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.510312   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510352   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.510372   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.510540   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0926 22:29:52.511814   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.512165   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.512498   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512694   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512861   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.512883   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.512940   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513138   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513164   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513463   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.513728   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.513760   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.513773   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513797   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.513819   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.513862   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514034   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514073   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514293   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514309   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.514314   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514335   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514397   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.514417   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.514466   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.514499   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514543   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514683   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.514718   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.514740   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.514803   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.514842   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515120   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.515158   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515212   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515314   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515313   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.515369   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515519   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.515528   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.515669   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.515801   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.515861   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.516001   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.516811   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.516866   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.517166   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.517319   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.517454   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.517568   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.518367   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.520659   10530 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:29:52.522403   10530 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:29:52.523881   10530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:52.523952   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:29:52.523993   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.527749   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528245   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.528291   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.528445   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.528629   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.528784   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.528962   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:29:52.529317   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0926 22:29:52.529881   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:29:52.530387   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:29:52.530406   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:29:52.530792   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:29:52.530983   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:29:52.532944   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:29:52.533139   10530 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:52.533157   10530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:29:52.533177   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:52.536538   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537055   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:52.537081   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:52.537257   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:52.537421   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:52.537573   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:52.537707   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	W0926 22:29:52.825147   10530 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
	I0926 22:29:52.825184   10530 retry.go:31] will retry after 357.513028ms: ssh: handshake failed: read tcp 192.168.39.1:42872->192.168.39.36:22: read: connection reset by peer
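[Editor's note] The two log lines above show sshutil's dial failing with a connection reset and retry.go scheduling another attempt roughly 357ms later. As a rough illustration of that fixed-delay retry pattern only (not minikube's actual retry.go or sshutil API; doRetry and dialSSH are made-up names), a minimal Go sketch:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // dialSSH stands in for the SSH dial that failed above with
    // "read: connection reset by peer"; it succeeds on the second try.
    func dialSSH(attempt int) error {
    	if attempt < 2 {
    		return errors.New("ssh: handshake failed: connection reset by peer")
    	}
    	return nil
    }

    // doRetry retries fn with a fixed delay, mirroring the
    // "will retry after 357.513028ms" behaviour seen in the log.
    func doRetry(maxAttempts int, delay time.Duration, fn func(int) error) error {
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(attempt); err == nil {
    			return nil
    		}
    		fmt.Printf("attempt %d failed: %v; will retry after %s\n", attempt, err, delay)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	if err := doRetry(3, 357*time.Millisecond, dialSSH); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }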
	I0926 22:29:53.505584   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:29:53.562710   10530 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:53.562734   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:29:53.582172   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:29:53.582193   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:29:53.609139   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:29:53.610191   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:29:53.632195   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:29:53.632223   10530 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:29:53.685673   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:29:53.685699   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:29:53.786927   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:29:53.923343   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.584115291s)
	I0926 22:29:53.923377   10530 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.578070431s)
	I0926 22:29:53.923465   10530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:53.923530   10530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 22:29:53.925682   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:29:53.978909   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:29:53.978947   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:29:54.056339   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:29:54.057576   10530 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:29:54.057598   10530 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:29:54.073051   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:29:54.083110   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:29:54.083142   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:29:54.130144   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:29:54.233603   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:29:54.233626   10530 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:29:54.338703   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:29:54.338734   10530 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:29:54.348111   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:29:54.401871   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:29:54.427136   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:29:54.427192   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:29:54.477977   10530 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.478003   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:29:54.488580   10530 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:29:54.488611   10530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:29:54.603741   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:29:54.603771   10530 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:29:54.621914   10530 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:54.621950   10530 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:29:54.727962   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:29:54.727996   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:29:54.784061   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:29:54.798515   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:29:54.798543   10530 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:29:54.906797   10530 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:54.906820   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:29:54.981155   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:29:55.311775   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:29:55.311837   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:29:55.397530   10530 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:55.397562   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:29:55.499650   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:29:55.662109   10530 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:29:55.662147   10530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:29:55.768710   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.263082445s)
	I0926 22:29:55.768768   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.768794   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769138   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.769186   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769194   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769212   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:55.769221   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:55.769505   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:55.769523   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:55.769540   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:55.841693   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:29:56.282914   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:29:56.282938   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:29:56.489532   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:29:56.489560   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:29:56.788009   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:29:56.788039   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:29:57.398847   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:29:57.398877   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:29:57.915310   10530 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:57.915334   10530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:29:58.078605   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:29:58.892510   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.283337619s)
	I0926 22:29:58.892546   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.282325805s)
	I0926 22:29:58.892563   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892578   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892593   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892605   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892599   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.105645313s)
	I0926 22:29:58.892637   10530 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.969155254s)
	I0926 22:29:58.892657   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892668   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892696   10530 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.969142392s)
	I0926 22:29:58.892721   10530 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
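[Editor's note] The bash pipeline that completed at 22:29:58 above is how the run injects a host.minikube.internal record into CoreDNS: it inserts a hosts block ahead of the forward plugin in the coredns ConfigMap and replaces it. As orientation only, a minimal client-go sketch of the same edit follows; minikube itself shells out to kubectl as logged, and the kubeconfig path and 192.168.39.1 host IP below are taken from this run as assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log; adjust for other environments.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Insert a hosts block resolving host.minikube.internal to the host IP,
    	// ahead of the forward plugin, as the sed pipeline in the log does.
    	hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
    	corefile := cm.Data["Corefile"]
    	if !strings.Contains(corefile, "host.minikube.internal") {
    		corefile = strings.Replace(corefile,
    			"        forward . /etc/resolv.conf",
    			hostsBlock+"        forward . /etc/resolv.conf", 1)
    		cm.Data["Corefile"] = corefile
    		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    		fmt.Println("host record injected into CoreDNS Corefile")
    	}
    }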
	I0926 22:29:58.892729   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.967021186s)
	I0926 22:29:58.892748   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.892757   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.892954   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.892998   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893008   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893017   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893125   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893140   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893142   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893148   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893155   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893215   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.893234   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893240   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893247   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893253   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893298   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.893304   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.893311   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:58.893317   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:58.893610   10530 node_ready.go:35] waiting up to 6m0s for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.895454   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895477   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895485   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895492   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895493   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895517   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895539   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895560   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.895521   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895572   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:58.895596   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:58.895613   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:58.929667   10530 node_ready.go:49] node "addons-330674" is "Ready"
	I0926 22:29:58.929709   10530 node_ready.go:38] duration metric: took 36.077495ms for node "addons-330674" to be "Ready" ...
	I0926 22:29:58.929729   10530 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:29:58.929805   10530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
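Note: node_ready.go above waits for the node's Ready condition before the addon checks proceed, and the pgrep that follows waits for the kube-apiserver process. A minimal client-go sketch of a Ready-condition wait of this kind (not minikube's node_ready.go; clientset setup, poll interval, and names are assumptions):

// nodewait sketches polling a node until its Ready condition turns True.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reported Ready, as logged above after ~36ms
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // arbitrary poll interval for the sketch
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}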
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.257543853s)
	I0926 22:29:59.313962   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.313971   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.313914   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.240823229s)
	I0926 22:29:59.314034   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314042   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314274   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314286   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314294   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314315   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314350   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314358   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314365   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.314372   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.314617   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.314651   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.314659   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.314968   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.315002   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.315029   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.577160   10530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-330674" context rescaled to 1 replicas
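Note: kapi.go:214 above scales the coredns deployment down to one replica once the cluster is up. A minimal sketch of such a rescale via the scale subresource (client setup and names are assumptions, not minikube's code):

// rescale sketches setting a deployment's replica count through the scale subresource.
package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	// read the current scale, change the desired replica count, write it back
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}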
	I0926 22:29:59.623147   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.623171   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.623479   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.623536   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.623556   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827523   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.697342297s)
	I0926 22:29:59.827568   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827580   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.827837   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:29:59.827864   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.827879   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.827899   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:29:59.827910   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:29:59.828169   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:29:59.828186   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:29:59.961517   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:29:59.961559   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:29:59.965295   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.965808   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:29:59.965856   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:29:59.966131   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:29:59.966338   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:29:59.966510   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:29:59.966670   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
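Note: sshutil.go:53 above opens the SSH connection used by the scp and ssh_runner steps, authenticating with the machine's generated key as the docker user. A minimal sketch of building such a client with golang.org/x/crypto/ssh (paths and the host-key policy are assumptions for the sketch):

// sshsketch shows dialing a node over SSH with a private-key credential.
package sshsketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}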
	I0926 22:30:00.039995   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:00.040024   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:00.040328   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:00.040346   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:00.125106   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.776957771s)
	W0926 22:30:00.125185   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:00.125204   10530 retry.go:31] will retry after 258.780744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
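Note: the addons.go:461 / retry.go:31 pair above is the apply-and-retry pattern used throughout this run: the failing kubectl apply is re-run after a growing delay (258ms here, then 343ms, 415ms, 690ms, and 720ms on later attempts). A minimal sketch of that pattern; the growth and jitter factors are assumptions, not minikube's actual backoff policy:

// retrysketch re-runs an operation with an increasing, slightly jittered delay.
package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

func retryApply(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// grow the delay a little each attempt so repeated failures back off
		delay = time.Duration(float64(delay) * (1.3 + rand.Float64()*0.5))
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

In this run even the later --force attempts on ig-crd.yaml hit the same apiVersion/kind validation error, so the retries keep firing.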
	I0926 22:30:00.324361   10530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:30:00.385019   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:00.672566   10530 addons.go:238] Setting addon gcp-auth=true in "addons-330674"
	I0926 22:30:00.672636   10530 host.go:66] Checking if "addons-330674" exists ...
	I0926 22:30:00.673096   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.673137   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.687087   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0926 22:30:00.687645   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.688187   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.688212   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.688516   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.689029   10530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:30:00.689057   10530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:30:00.702335   10530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0926 22:30:00.702789   10530 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:30:00.703222   10530 main.go:141] libmachine: Using API Version  1
	I0926 22:30:00.703244   10530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:30:00.703562   10530 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:30:00.703802   10530 main.go:141] libmachine: (addons-330674) Calling .GetState
	I0926 22:30:00.705815   10530 main.go:141] libmachine: (addons-330674) Calling .DriverName
	I0926 22:30:00.706084   10530 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:30:00.706107   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHHostname
	I0926 22:30:00.709280   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709679   10530 main.go:141] libmachine: (addons-330674) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:3c:4a", ip: ""} in network mk-addons-330674: {Iface:virbr1 ExpiryTime:2025-09-26 23:29:24 +0000 UTC Type:0 Mac:52:54:00:fe:3c:4a Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:addons-330674 Clientid:01:52:54:00:fe:3c:4a}
	I0926 22:30:00.709711   10530 main.go:141] libmachine: (addons-330674) DBG | domain addons-330674 has defined IP address 192.168.39.36 and MAC address 52:54:00:fe:3c:4a in network mk-addons-330674
	I0926 22:30:00.709896   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHPort
	I0926 22:30:00.710096   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHKeyPath
	I0926 22:30:00.710284   10530 main.go:141] libmachine: (addons-330674) Calling .GetSSHUsername
	I0926 22:30:00.710443   10530 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/addons-330674/id_rsa Username:docker}
	I0926 22:30:02.404757   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.620648683s)
	I0926 22:30:02.404821   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404859   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404866   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.423678426s)
	I0926 22:30:02.404891   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.404914   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.404943   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.905250926s)
	I0926 22:30:02.404990   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405022   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.563299799s)
	W0926 22:30:02.405057   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:30:02.405082   10530 retry.go:31] will retry after 343.769978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
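Note: the VolumeSnapshotClass failure above is an ordering race: the snapshot CRDs and a VolumeSnapshotClass are applied in one kubectl invocation, and the CR is rejected because its CRD is not yet established; the run recovers by retrying. One alternative (a sketch only; minikube itself just retries) is to wait for the CRD's Established condition before applying its custom resources:

// crdwait polls a CRD until it reports Established=True, so that custom
// resources of that kind can be applied without a "no matches for kind" error.
package crdwait

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func waitCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}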
	I0926 22:30:02.405209   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405221   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405230   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405237   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405249   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405258   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405266   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405272   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405336   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405337   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405348   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405356   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.405363   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.405530   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405546   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405653   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405699   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405707   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405715   10530 addons.go:479] Verifying addon metrics-server=true in "addons-330674"
	I0926 22:30:02.405821   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:02.405866   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.405877   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.405883   10530 addons.go:479] Verifying addon registry=true in "addons-330674"
	I0926 22:30:02.407363   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.005456123s)
	I0926 22:30:02.407398   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407407   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407601   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407618   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407627   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:02.407635   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:02.407841   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:02.407857   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:02.407865   10530 addons.go:479] Verifying addon ingress=true in "addons-330674"
	I0926 22:30:02.408308   10530 out.go:179] * Verifying registry addon...
	I0926 22:30:02.408383   10530 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-330674 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:30:02.409149   10530 out.go:179] * Verifying ingress addon...
	I0926 22:30:02.410466   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:30:02.411443   10530 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:30:02.443532   10530 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:30:02.443553   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:02.450316   10530 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:30:02.450340   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
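Note: the kapi.go:75 / kapi.go:96 lines above (and the many that follow) poll pods matched by a label selector until they leave Pending. A minimal client-go sketch of one poll step of that kind (not minikube's kapi.go; names and client setup are assumptions):

// podwait reports whether every pod matching a selector has reached Running.
package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet, keep polling
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil // at least one pod is still Pending, as in the log above
		}
	}
	return true, nil
}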
	I0926 22:30:02.749736   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:30:02.955435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.005505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.290182   10530 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.360350718s)
	I0926 22:30:03.290221   10530 api_server.go:72] duration metric: took 10.950960949s to wait for apiserver process to appear ...
	I0926 22:30:03.290227   10530 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:30:03.290245   10530 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0926 22:30:03.291781   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.213122213s)
	I0926 22:30:03.291866   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.291892   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292156   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292173   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292181   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:03.292189   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:03.292447   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:03.292465   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:03.292477   10530 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-330674"
	I0926 22:30:03.294391   10530 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:30:03.297053   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:30:03.321731   10530 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0926 22:30:03.330874   10530 api_server.go:141] control plane version: v1.34.0
	I0926 22:30:03.330909   10530 api_server.go:131] duration metric: took 40.674253ms to wait for apiserver health ...
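Note: api_server.go above probes https://192.168.39.36:8443/healthz and treats a 200 response with body "ok" as healthy before moving on to the kube-system pod checks. A minimal sketch of such a probe; TLS verification is skipped only to keep the sketch short, a real client would trust the cluster CA instead:

// A single apiserver healthz probe, runnable standalone.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // 200 with body "ok", matching the log above
}

func main() {
	if err := checkHealthz("https://192.168.39.36:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}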
	I0926 22:30:03.330920   10530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:30:03.344023   10530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:30:03.344056   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.403718   10530 system_pods.go:59] 20 kube-system pods found
	I0926 22:30:03.403767   10530 system_pods.go:61] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.403775   10530 system_pods.go:61] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.403782   10530 system_pods.go:61] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.403788   10530 system_pods.go:61] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.403793   10530 system_pods.go:61] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.403801   10530 system_pods.go:61] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.403805   10530 system_pods.go:61] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.403809   10530 system_pods.go:61] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.403814   10530 system_pods.go:61] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.403839   10530 system_pods.go:61] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.403855   10530 system_pods.go:61] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.403861   10530 system_pods.go:61] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.403868   10530 system_pods.go:61] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.403877   10530 system_pods.go:61] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.403885   10530 system_pods.go:61] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.403892   10530 system_pods.go:61] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.403899   10530 system_pods.go:61] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.403905   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403911   10530 system_pods.go:61] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.403923   10530 system_pods.go:61] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.403929   10530 system_pods.go:74] duration metric: took 73.003109ms to wait for pod list to return data ...
	I0926 22:30:03.403938   10530 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:30:03.416293   10530 default_sa.go:45] found service account: "default"
	I0926 22:30:03.416322   10530 default_sa.go:55] duration metric: took 12.37763ms for default service account to be created ...
	I0926 22:30:03.416335   10530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:30:03.420408   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.420640   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:03.435848   10530 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:03.435885   10530 system_pods.go:89] "amd-gpu-device-plugin-cdb8s" [b42dc693-f8dc-488e-a6df-11603c5146c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:03.435896   10530 system_pods.go:89] "coredns-66bc5c9577-s7j79" [685dab00-8a34-4029-b32e-d39a08e61560] Running
	I0926 22:30:03.435903   10530 system_pods.go:89] "coredns-66bc5c9577-vcwdm" [6a3371fb-cab7-4a7e-8907-e11b45338ed0] Running
	I0926 22:30:03.435909   10530 system_pods.go:89] "csi-hostpath-attacher-0" [b261b610-5540-4a39-af53-0a988f5316a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:03.435920   10530 system_pods.go:89] "csi-hostpath-resizer-0" [cc7afc9a-219f-4080-9fba-b24d07fadc30] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:03.435926   10530 system_pods.go:89] "csi-hostpathplugin-mk92b" [98d7012b-de84-42ba-8ec1-3e1578c28cfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:03.435933   10530 system_pods.go:89] "etcd-addons-330674" [1ada4ec6-135f-43be-bb60-af64ae2a0259] Running
	I0926 22:30:03.435938   10530 system_pods.go:89] "kube-apiserver-addons-330674" [85dd874b-a8d2-4a72-be1b-d09107cf46d1] Running
	I0926 22:30:03.435943   10530 system_pods.go:89] "kube-controller-manager-addons-330674" [e8c1d449-4682-421a-ac32-8cd0847bf13d] Running
	I0926 22:30:03.435948   10530 system_pods.go:89] "kube-ingress-dns-minikube" [d20fd4fa-1f62-423e-a836-f66893f73949] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:03.435961   10530 system_pods.go:89] "kube-proxy-lldr6" [e3500915-4e56-473c-8674-5ea502daaac6] Running
	I0926 22:30:03.435968   10530 system_pods.go:89] "kube-scheduler-addons-330674" [6f79c673-6fec-4e6d-a974-50991d63a4a3] Running
	I0926 22:30:03.435973   10530 system_pods.go:89] "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:03.435983   10530 system_pods.go:89] "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:03.435990   10530 system_pods.go:89] "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:03.435995   10530 system_pods.go:89] "registry-creds-764b6fb674-hjbpz" [5f2c62bb-e38c-4e78-a9aa-995812c7d2ef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:03.436004   10530 system_pods.go:89] "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:03.436011   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-btkpl" [d9d7b772-8f8e-4095-aaa6-fc9b1d68c681] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436030   10530 system_pods.go:89] "snapshot-controller-7d9fbc56b8-n4kkw" [86602a14-6de0-44fe-99ba-f64d79426345] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:03.436040   10530 system_pods.go:89] "storage-provisioner" [805513c7-5529-4f0e-bbe6-de0e474ba2ba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:03.436051   10530 system_pods.go:126] duration metric: took 19.710312ms to wait for k8s-apps to be running ...
	I0926 22:30:03.436063   10530 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:30:03.436116   10530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:30:03.805385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:03.933120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:03.935740   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.103360   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.718280199s)
	W0926 22:30:04.103409   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 retry.go:31] will retry after 415.010612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:04.103441   10530 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.397332098s)
	I0926 22:30:04.105638   10530 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:04.107144   10530 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:30:04.108740   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:30:04.108757   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:30:04.204504   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:30:04.204558   10530 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:30:04.266226   10530 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.266270   10530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:30:04.318135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.326300   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:04.425264   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:04.425430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.519163   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:04.804743   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:04.918462   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:04.921343   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.305855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.419096   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:05.420385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.480378   10530 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.044238076s)
	I0926 22:30:05.480434   10530 system_svc.go:56] duration metric: took 2.044366858s WaitForService to wait for kubelet
	I0926 22:30:05.480445   10530 kubeadm.go:586] duration metric: took 13.141186204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:30:05.480467   10530 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:30:05.480379   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.730593729s)
	I0926 22:30:05.480567   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480587   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.480910   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.480930   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.480948   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:05.480958   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:05.481297   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:05.481319   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:05.481322   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:05.490128   10530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 22:30:05.490159   10530 node_conditions.go:123] node cpu capacity is 2
	I0926 22:30:05.490173   10530 node_conditions.go:105] duration metric: took 9.698866ms to run NodePressure ...
	I0926 22:30:05.490188   10530 start.go:241] waiting for startup goroutines ...
	I0926 22:30:05.823251   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:05.995165   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:05.995238   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.168992   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.842648363s)
	I0926 22:30:06.169046   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169088   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169430   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169452   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169462   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:30:06.169470   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:30:06.169730   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:30:06.169745   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:30:06.169769   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:30:06.170927   10530 addons.go:479] Verifying addon gcp-auth=true in "addons-330674"
	I0926 22:30:06.172988   10530 out.go:179] * Verifying gcp-auth addon...
	I0926 22:30:06.174897   10530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:30:06.212287   10530 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:30:06.212317   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.312659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.419336   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:06.421545   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.682289   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:06.707555   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.188348588s)
	W0926 22:30:06.707615   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:06.707638   10530 retry.go:31] will retry after 690.015659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:06.806300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:06.928806   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:06.928935   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.182496   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.398719   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:07.423608   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.424145   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:07.683323   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:07.805352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:07.926676   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:07.926821   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.183118   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.305133   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.418514   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.420565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:08.679221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:08.802855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:08.849509   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450746787s)
	W0926 22:30:08.849558   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:08.849579   10530 retry.go:31] will retry after 720.875973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
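The retry loop above keeps hitting the same client-side validation failure: kubectl reports that /etc/kubernetes/addons/ig-crd.yaml has no top-level apiVersion or kind fields, so the CRD file is rejected while the objects from ig-deployment.yaml continue to apply cleanly. As an illustration only (the actual contents of ig-crd.yaml are not shown anywhere in this log, so the group, kind, and resource names below are hypothetical placeholders), a CustomResourceDefinition manifest that would pass this validation starts with a header like:

	# Minimal sketch of the fields kubectl's validator is complaining about.
	# All names below are placeholders, not taken from this test run.
	apiVersion: apiextensions.k8s.io/v1        # missing "apiVersion" triggers the error above
	kind: CustomResourceDefinition             # missing "kind" triggers the error above
	metadata:
	  name: traces.gadget.kinvolk.io           # must be <plural>.<group>
	spec:
	  group: gadget.kinvolk.io                 # hypothetical API group
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true

The error text itself also names the other way out, passing --validate=false to kubectl to skip this client-side schema check, which is why each retry reprints the same hint.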
	I0926 22:30:08.914397   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:08.916076   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.178734   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.301290   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.420684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:09.421209   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.571363   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:09.684948   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:09.814626   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:09.920020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:09.920521   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.184424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.302867   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.415872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.418972   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:10.681185   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:10.802134   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:10.816960   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245551165s)
	W0926 22:30:10.817021   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:10.817043   10530 retry.go:31] will retry after 1.516018438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:10.916672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:10.920270   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.178990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.306805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.418242   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.419600   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:11.680889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:11.804313   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:11.914838   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:11.918376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.180561   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.301512   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.333663   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:12.415805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:12.419363   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.682335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:12.804222   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:12.918788   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:12.919995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.180331   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.305340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.415577   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:13.416349   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.683699   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:13.805707   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:13.813715   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.480003432s)
	W0926 22:30:13.813753   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.813774   10530 retry.go:31] will retry after 1.257586739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:13.921625   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:13.925319   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.180615   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.305510   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.415983   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:14.416424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.679635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:14.807576   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:14.915558   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:14.917303   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.071517   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:15.181159   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.306945   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.418630   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:15.418800   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.679147   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:15.893712   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:15.916744   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:15.917096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.185591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.304040   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.326267   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.254707359s)
	W0926 22:30:16.326313   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.326336   10530 retry.go:31] will retry after 2.377890696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.416481   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.419518   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.681550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:16.803052   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.918664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.919009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.182452   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:17.302075   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.413448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.047202   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.047385   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.047552   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.048184   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.179560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.303903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.418028   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.421419   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.680067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:18.705254   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:18.801213   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.914739   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.917654   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.179344   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.303239   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.418321   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.418678   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.679164   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:19.806674   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.908858   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.203561998s)
	W0926 22:30:19.908904   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.908926   10530 retry.go:31] will retry after 4.32939773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.917643   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:19.919920   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.581572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.582550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.583652   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.584766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.679458   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:20.802582   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.916995   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.918666   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.180913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.332135   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.417484   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.417798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.679247   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:21.801601   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.921505   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.923595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.206659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.303078   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.415068   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.416432   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.682206   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:22.802352   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.916004   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.916426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.178440   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.302488   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.416760   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.417074   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.678471   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.801463   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.914659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.915754   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.183326   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.239507   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:24.305343   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.420822   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.422445   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.681588   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.803334   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.920591   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.921194   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.181354   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.300531   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.414416   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:25.415291   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.431734   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.19217319s)
	W0926 22:30:25.431806   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.431843   10530 retry.go:31] will retry after 4.927424107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:25.679778   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.804725   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.917163   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.917189   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.181015   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.302673   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.415255   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.416011   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.932748   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.938776   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:26.939199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.939659   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.179484   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.300382   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.413855   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.416495   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.679241   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.803067   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.915766   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.916504   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.179926   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.303820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.417009   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.417362   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.680438   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.803693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.913738   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.917580   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.183260   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.305035   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.415252   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.421557   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.681884   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.801694   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.917990   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.920375   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.183992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.303403   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.359440   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:30.416736   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.418359   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.679889   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.802012   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.916345   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.916485   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:31.151193   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.151227   10530 retry.go:31] will retry after 11.763207551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:31.179522   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.300872   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.417428   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.421535   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.683158   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.804166   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.917250   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.919814   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.180485   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.301448   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.414799   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.416565   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.682199   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.802085   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.918254   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.920864   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.180283   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.302044   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.418195   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.420283   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.682205   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.802900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.915518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.917060   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.183894   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.302424   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.418071   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.418937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.681883   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.802739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.913927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.918879   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.348473   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.348627   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.447966   10530 kapi.go:107] duration metric: took 33.037496042s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:30:35.448199   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.683550   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.802457   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.919287   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.178520   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.307082   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.415664   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.678900   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.803136   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.917411   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.185045   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.305913   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.630651   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.685375   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.802798   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.916719   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.181102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.303094   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.417302   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.678435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:38.801995   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.915065   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.178903   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.304329   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.416763   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.680033   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.801768   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.920400   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.180647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.304347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.416722   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.680569   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.803376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.917005   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.180461   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.304146   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.417255   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.886447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.888300   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.917365   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.180186   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.301635   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.419758   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.684808   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.804001   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.915430   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:42.923040   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.179997   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.306383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.417022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.682482   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.804992   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.922647   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.178880   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.240115   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.324639979s)
	W0926 22:30:44.240173   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.240195   10530 retry.go:31] will retry after 8.858097577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:44.303169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.418771   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.679551   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.801684   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.916013   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.179885   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.304426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.428618   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.683426   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.810100   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.925137   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.179160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.304364   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.448027   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.680201   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.805269   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.918049   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.181812   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.303700   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.415733   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.678623   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.808820   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.924088   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.303763   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.424961   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.683665   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.803327   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.916118   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.178848   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.307797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.416656   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.678851   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.802681   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.915714   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.180965   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.302266   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.415480   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.678616   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.804349   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.915318   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.184191   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.304048   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.418336   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.681435   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.804006   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.920620   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.183727   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.302182   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.416612   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.680540   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.804272   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.916855   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.099065   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:53.180672   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.305123   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.420113   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.685179   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.804757   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.917568   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.182857   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.302373   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.363811   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.264695675s)
	W0926 22:30:54.363881   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:54.363905   10530 retry.go:31] will retry after 15.55536091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
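Note: the failure above is kubectl's client-side validation; the error string itself reports that the first document in /etc/kubernetes/addons/ig-crd.yaml reaches kubectl with neither apiVersion nor kind set, so the apply is rejected before anything is sent to the API server. Below is a minimal Go sketch of an equivalent pre-apply check, offered only as an illustration of that validation rule (it is not minikube's or kubectl's actual code; the gopkg.in/yaml.v3 dependency and the file argument are assumptions of the sketch).

package main

// Hypothetical pre-apply check: kubectl rejects any manifest document that
// does not set apiVersion and kind, which is the complaint logged above for
// ig-crd.yaml. Only the first YAML document is inspected, for brevity.

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkmanifest <manifest.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var tm typeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		fmt.Fprintln(os.Stderr, "not valid YAML:", err)
		os.Exit(1)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("error: apiVersion or kind not set; kubectl apply would fail validation")
		os.Exit(1)
	}
	fmt.Printf("ok: %s (%s)\n", tm.Kind, tm.APIVersion)
}
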
	I0926 22:30:54.417539   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.681049   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.805028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.915452   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.179696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.301978   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.415794   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.679572   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.819347   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.918310   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.198401   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.304413   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.419426   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.680091   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.801779   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.918752   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.179612   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.301230   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.417433   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.681559   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.804383   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.917958   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.184656   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.306258   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.417260   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.698392   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.807597   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.915960   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.185696   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.303096   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.416022   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.683432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.802671   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.916001   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.181296   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.301887   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.427020   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.678513   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.801870   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.920491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.185028   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.304169   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.418926   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.685221   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.802805   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.915852   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.180224   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.310447   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.417773   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.684271   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.802160   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.917181   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.179667   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.305578   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.421443   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.679070   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.801937   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.915703   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.183143   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.303032   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.416888   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.681175   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.804024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.931508   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.179817   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.303489   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.417042   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.679451   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.802120   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.918159   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.182494   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.401415   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.422627   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.679776   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.809902   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.918997   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.181491   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.302724   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.420205   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.680745   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:07.802430   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.917742   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.180112   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.301417   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.419665   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.679714   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.804244   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.918524   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.179876   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.302541   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.416678   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.680295   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.803785   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.916555   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.919538   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:31:10.182518   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.302156   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.417516   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:10.681589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.803589   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.918491   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.184181   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.304515   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.419292   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.446493   10530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.526922683s)
	W0926 22:31:11.446528   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.446544   10530 retry.go:31] will retry after 18.44611829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:11.678436   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.807747   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.919354   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.183063   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.311693   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.420067   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.680144   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.802750   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.915380   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.178429   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.304983   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.473623   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.681102   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.802854   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.917953   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.183739   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.306018   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.646952   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.685595   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.802999   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.921890   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.181084   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:15.302376   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:15.419849   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.683746   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.022493   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.022587   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.182478   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.302322   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.418598   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.679927   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.808355   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.925473   10530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:17.186059   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.302020   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:17.427294   10530 kapi.go:107] duration metric: took 1m15.015851492s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:31:17.679432   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.802560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.182037   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.300453   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.682444   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.804335   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.183050   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.303647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.682844   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.801755   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.180116   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.303024   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.683340   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.802598   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.185647   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:21.303560   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.682723   10530 kapi.go:107] duration metric: took 1m15.507819233s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:21.684569   10530 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-330674 cluster.
	I0926 22:31:21.685984   10530 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:21.687420   10530 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0926 22:31:21.803101   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.301291   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.802797   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.304046   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.801813   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.302450   10530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.802449   10530 kapi.go:107] duration metric: took 1m21.505395208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
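Note: the repeated kapi.go:96 lines above are a poll loop: pods matching each addon's label selector are listed over and over until one reports Running (or the per-addon timeout expires), which is why a single selector can log hundreds of Pending states before its "duration metric: took ..." line. A minimal client-go sketch of that polling pattern follows; it assumes a reachable cluster and a kubeconfig at $HOME/.kube/config, and names such as waitForLabel are placeholders for illustration, not minikube's API.

package main

// Minimal sketch of waiting for a pod by label selector, illustrating the
// pattern behind the "waiting for pod ... current state: Pending" lines.
// This is not minikube's kapi.go implementation.

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until one is Running or the
// timeout is reached.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat list errors as transient and keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
		}
		return false, nil
	})
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
		panic(err)
	}
}
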
	I0926 22:31:29.894273   10530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:31:30.655606   10530 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:30.655687   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.655705   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.655977   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.655997   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656006   10530 main.go:141] libmachine: Making call to close driver server
	I0926 22:31:30.656013   10530 main.go:141] libmachine: (addons-330674) Calling .Close
	I0926 22:31:30.656033   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	I0926 22:31:30.656218   10530 main.go:141] libmachine: Successfully made call to close driver server
	I0926 22:31:30.656238   10530 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 22:31:30.656214   10530 main.go:141] libmachine: (addons-330674) DBG | Closing plugin on server side
	W0926 22:31:30.656316   10530 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
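Note: before the addon is finally reported as failed above, the same apply is retried with growing, randomized delays (15.5s, then 18.4s in the earlier retry.go:31 lines). The sketch below illustrates that retry-with-jittered-backoff pattern under those assumptions; it is not minikube's retry.go implementation, and retryApply and its parameters are made-up names for illustration.

package main

// Minimal sketch of retrying a failing operation with a growing, jittered
// delay, mirroring the "will retry after ..." behaviour seen in the log.

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply runs apply up to attempts times, sleeping a randomized, growing
// delay between failures, and returns the last error if every attempt fails.
func retryApply(apply func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		// jitter spreads callers out so they do not retry in lockstep
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	err := retryApply(func() error {
		return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
	}, 3, 2*time.Second)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
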
	I0926 22:31:30.659216   10530 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, registry-creds, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0926 22:31:30.660657   10530 addons.go:514] duration metric: took 1m38.321386508s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner registry-creds ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0926 22:31:30.660695   10530 start.go:246] waiting for cluster config update ...
	I0926 22:31:30.660716   10530 start.go:255] writing updated cluster config ...
	I0926 22:31:30.660982   10530 ssh_runner.go:195] Run: rm -f paused
	I0926 22:31:30.667682   10530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:30.672263   10530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.678377   10530 pod_ready.go:94] pod "coredns-66bc5c9577-vcwdm" is "Ready"
	I0926 22:31:30.678398   10530 pod_ready.go:86] duration metric: took 6.113857ms for pod "coredns-66bc5c9577-vcwdm" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.681561   10530 pod_ready.go:83] waiting for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.687574   10530 pod_ready.go:94] pod "etcd-addons-330674" is "Ready"
	I0926 22:31:30.687599   10530 pod_ready.go:86] duration metric: took 6.011516ms for pod "etcd-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.690685   10530 pod_ready.go:83] waiting for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.695334   10530 pod_ready.go:94] pod "kube-apiserver-addons-330674" is "Ready"
	I0926 22:31:30.695353   10530 pod_ready.go:86] duration metric: took 4.646437ms for pod "kube-apiserver-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:30.697972   10530 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.073074   10530 pod_ready.go:94] pod "kube-controller-manager-addons-330674" is "Ready"
	I0926 22:31:31.073098   10530 pod_ready.go:86] duration metric: took 375.106541ms for pod "kube-controller-manager-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.272175   10530 pod_ready.go:83] waiting for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.672837   10530 pod_ready.go:94] pod "kube-proxy-lldr6" is "Ready"
	I0926 22:31:31.672859   10530 pod_ready.go:86] duration metric: took 400.65065ms for pod "kube-proxy-lldr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:31.872942   10530 pod_ready.go:83] waiting for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272335   10530 pod_ready.go:94] pod "kube-scheduler-addons-330674" is "Ready"
	I0926 22:31:32.272368   10530 pod_ready.go:86] duration metric: took 399.399542ms for pod "kube-scheduler-addons-330674" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:31:32.272382   10530 pod_ready.go:40] duration metric: took 1.604672258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:31:32.319206   10530 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:31:32.320852   10530 out.go:179] * Done! kubectl is now configured to use "addons-330674" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.794288985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07bbee24-562d-4102-8744-7fedf9231718 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.796225745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d85f9508-99c4-41e8-aedd-1653eaced10a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.797387003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926108797359657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d85f9508-99c4-41e8-aedd-1653eaced10a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.798030212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ecd256f-0eae-4d75-a878-1997dc56e902 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.798209301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ecd256f-0eae-4d75-a878-1997dc56e902 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.798748710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f894b8af4f888d798535ef9cc5ee6b212a47d7bab6e08119879d11066730833f,PodSandboxId:dad20b6b94d34d4317a16783526042b0b80a7cb5529d70d23bedc6c1e4128319,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1758925846456282561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-5pmvk,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 07063f6d-dff7-4d19-a1cf-ac4db04a8027,},Annotations:
map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-
1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5
d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613d
e46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ecd256f-0eae-4d75-a878-1997dc56e902 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.804528096Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc91622d-d006-4cac-a831-9cb255e1eb94 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.805039302Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f0dfb340c7acc71722a923948703a88ff82fad69888199f5e1009ca653c168e2,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:6ceec17b-136a-4af6-8734-faa16ecd08bc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925939283518153,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ceec17b-136a-4af6-8734-faa16ecd08bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:32:18.964343665Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0761334691d11b5cd6e2dbc992dc9ad3da90b958ec97e04f000689996dd3399b,Metadata:&PodSandboxMetadata{Name:test-local-path,Uid:8d821a63-845c-4938-9b63-a3f7ca3a23d9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925927735437081,Labels:map[stri
ng]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d821a63-845c-4938-9b63-a3f7ca3a23d9,run: test-local-path,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"test-local-path\"},\"name\":\"test-local-path\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"echo 'local-path-provisioner' \\u003e /test/file1\"],\"image\":\"busybox:stable\",\"name\":\"busybox\",\"volumeMounts\":[{\"mountPath\":\"/test\",\"name\":\"data\"}]}],\"restartPolicy\":\"OnFailure\",\"volumes\":[{\"name\":\"data\",\"persistentVolumeClaim\":{\"claimName\":\"test-pvc\"}}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:32:07.355016503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69e24080ba85f071cd6d09268c5dc2fbf53e2ffc83252495a776d8af697ca32e,Metadata:&PodSandboxMetadata{N
ame:nginx,Uid:cf3126e1-0cb8-4c12-8028-997b82450384,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925927220299325,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cf3126e1-0cb8-4c12-8028-997b82450384,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:32:06.896045375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&PodSandboxMetadata{Name:busybox,Uid:445fcb70-08b0-49c8-b65c-eda21a3d6feb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925893229375654,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:31:32.910425126Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-9cc49f96f-kbqsf,Uid:9dd82dc5-ecb0-431a-8606-e0b251a33909,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925866763165706,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a33909,pod-template-hash: 9cc49f96f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:02.104736079Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:b261b61
0-5540-4a39-af53-0a988f5316a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925807145535441,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:02.961217494Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:cc7afc9a-219f-4080-9fba-b24d07fadc30,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925807133248253,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:03.297700360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-mk92b,Uid:98d7012b-de84-42ba-8ec1-3e1578c28cfd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925804101139645,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instanc
e: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:03.176781968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a815f5a2dbf404e19335b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-btkpl,Uid:d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925802374024668,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:01.651307174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-n4kkw,Uid:86602a14-6de0-44fe-99ba-f64d79426345,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925802334162727,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:30:01.703654170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&PodSandboxMet
adata{Name:gadget-c5fsh,Uid:1d4706ed-d612-42b6-8ce7-1c3b53174964,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925801006548356,Labels:map[string]string{controller-revision-hash: 5d99b94fd5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-09-26T22:30:00.334268717Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:d20fd4fa-1f62-423e-a836-f66893f73949,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925800918720327,Labels:map[string]string{app: minikube-ingress-dns,ap
p.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-f66893f73949,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"na
me\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:29:58.732802256Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dad20b6b94d34d4317a16783526042b0b80a7cb5529d70d23bedc6c1e4128319,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-5pmvk,Uid:07063f6d-dff7-4d19-a1cf-ac4db04a8027,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925800860639988,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-5pmvk,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 07063f6d-dff7-4d19-a1cf-ac4db04a8027,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:29:59.868510448Z,kubernetes.io
/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:805513c7-5529-4f0e-bbe6-de0e474ba2ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925800034335610,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisi
oner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:29:59.365216512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-cdb8s,Uid:b42dc693-f8dc-488e-a6df-11603c5146c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925796779860359,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:29:55.795490000Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-vcwdm,Uid:6a3371fb-cab7-4a7e-8907-e11b45338ed0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925792562938176,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:29:52.178545051Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&PodSandboxMetadata{Name:kube-proxy-lldr6,Uid:e3500915-4e56-473c-8674-5ea502daaac6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READ
Y,CreatedAt:1758925791972970744,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:29:51.649974841Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&PodSandboxMetadata{Name:etcd-addons-330674,Uid:07b3ab0a34880a8a828bd4ec7b048073,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925780124713649,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client
-urls: https://192.168.39.36:2379,kubernetes.io/config.hash: 07b3ab0a34880a8a828bd4ec7b048073,kubernetes.io/config.seen: 2025-09-26T22:29:38.634825704Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-330674,Uid:5cd7d325e4c1d60f88ed2ac4cd01e5f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925780123652474,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,kubernetes.io/config.seen: 2025-09-26T22:29:38.634831216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&PodSa
ndboxMetadata{Name:kube-controller-manager-addons-330674,Uid:72ebb9a6bc31069e8c997f3161744cee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925780120279215,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 72ebb9a6bc31069e8c997f3161744cee,kubernetes.io/config.seen: 2025-09-26T22:29:38.634830407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-330674,Uid:7596254403ac958c412ddaf08adf07c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758925780118659334,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.na
me: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.36:8443,kubernetes.io/config.hash: 7596254403ac958c412ddaf08adf07c0,kubernetes.io/config.seen: 2025-09-26T22:29:38.634829275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cc91622d-d006-4cac-a831-9cb255e1eb94 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.806999357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7434f00f-1a29-43c7-85f9-6ddaa52af75b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.807060586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7434f00f-1a29-43c7-85f9-6ddaa52af75b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.807748640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:
0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{
Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandbox
Id:a815f5a2dbf404e19335b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a
d63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f894b8af4f888d798535ef9cc5ee6b212a47d7bab6e08119879d11066730833f,PodSandboxId:dad20b6b94d34d4317a16783526042b0b80a7cb5529d70d23bedc6c1e4128319,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1758925846456282561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-5pmvk,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 07063f6d-dff7-4d19-a1cf-ac4db04a8027,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-1f62-423e-a836-
f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kube
rnetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernet
es.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vcwd
m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fed
e90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46,PodSandboxI
d:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7434f00f-1a29-43c7-85f9-6ddaa52af75b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.846443236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee47be3f-8b9b-4489-8531-7846caaf0e0c name=/runtime.v1.RuntimeService/Version
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.846544659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee47be3f-8b9b-4489-8531-7846caaf0e0c name=/runtime.v1.RuntimeService/Version
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.847855523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec9d4e8d-b278-4ffa-a3c6-7643c3bc1062 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.849816208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926108849760223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec9d4e8d-b278-4ffa-a3c6-7643c3bc1062 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.850628678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4049eb02-900c-489d-81cb-78d092904871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.850704142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4049eb02-900c-489d-81cb-78d092904871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.851449780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f894b8af4f888d798535ef9cc5ee6b212a47d7bab6e08119879d11066730833f,PodSandboxId:dad20b6b94d34d4317a16783526042b0b80a7cb5529d70d23bedc6c1e4128319,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1758925846456282561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-5pmvk,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 07063f6d-dff7-4d19-a1cf-ac4db04a8027,},Annotations:
map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-
1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5
d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613d
e46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4049eb02-900c-489d-81cb-78d092904871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.895365049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbf7184a-6be0-49b6-adef-578e6dbd888e name=/runtime.v1.RuntimeService/Version
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.895722631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbf7184a-6be0-49b6-adef-578e6dbd888e name=/runtime.v1.RuntimeService/Version
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.897709380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a791983-27f0-43ea-bb43-9a856eecb570 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.898875077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758926108898847573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519332,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a791983-27f0-43ea-bb43-9a856eecb570 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.899842999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ba30215-8e47-4b80-a1d0-31b171b11f49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.899967031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ba30215-8e47-4b80-a1d0-31b171b11f49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:35:08 addons-330674 crio[823]: time="2025-09-26 22:35:08.901470097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6b78ecb5174fb2b7f86cd2c4e767d94697649a74394ddfbca2309130d6eaa8c,PodSandboxId:b3f170d8fa06d1d92adb39a7915d41ba2dd5740703a6e0c23e6edf4dbe1e00e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758925895547677835,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 445fcb70-08b0-49c8-b65c-eda21a3d6feb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e668eda665a7a08b34feaa3af5faa4b856d4f2274289b47f59ac62775d884c7b,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1758925883829207657,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538a2e1c158d6e0ddd664a14b4f7f50a76ea8db010807a81cf19e75c642609c,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1758925882342359353,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4ebcaf3e79e919d918b46cea972c486ae80c8d876a319eda1745f363dea05b5,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1758925877340520479,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041e5164edc9638ac7a5e3fb9b42dc3d246076e9c3024a78f6c14deca9aadc24,PodSandboxId:8725b0863596a05617b28f599b741b374f47553116849600ccb62872a79198c1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758925876391663761,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-kbqsf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9dd82dc5-ecb0-431a-8606-e0b251a
33909,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:051406a4cc7e967f8fd03e63ab8bb5dbef64f6fb8e2ca56e77fac0d9cde5d0b0,PodSandboxId:1a394bb7ee033d4fc2928bdf9c7146d58a16612bf1d25551c70d873eb6356748,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0f
eee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925872035993264,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-vpbtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae336bc2-9fe3-4fb6-993b-62ec6c833145,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d7e9db5f9b62cd3f118452b38898613062052e6a73544db8762f91c8543664,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1758925868479357234,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c84352a2a8e683a652ad475b5e0f5655ca9517e730834ed24bbfd9441f90fe,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-r
egistrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1758925866899303215,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c1d863f8e436fa0e4296d0bfefd78152e3f96d2caab6f0acb5c72f3c8dd4df,PodSandboxId:9dd7e0e52b989a77f23c4a05b1a811c382a9672cb44371141f84a6df218f03b9,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Im
age:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1758925865381688019,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b261b610-5540-4a39-af53-0a988f5316a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cd7128f9bd99749d9db7b92af36ff2646595fd5d80c6dea1069c4382a13d4a,PodSandboxId:d392af405e051900070076eccf981c9c49ee880242e8369dca1e725ea97a7fad,Metadata:&ContainerMetadata{Name:csi-resizer,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1758925863323882758,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7afc9a-219f-4080-9fba-b24d07fadc30,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f6029288793e4abd0066aa6da1847c69b017784a5c35379a49b85eb7669403,PodSandboxId:5cdd7c9d00703096393b81c168e88cd01d6844aa45cc110a1814ee36f822d4fe,Metadata:&ContainerMetadata{N
ame:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861639663693,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-n4kkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86602a14-6de0-44fe-99ba-f64d79426345,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065e1b8cc9a3878570b147f7b4019037cc2a1ce3c168b755fdcfa869fde88932,PodSandboxId:a815f5a2dbf404e19335
b4ed5bb0c565334c0dfd579d1b5cfb9d2ea7df6634f7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1758925861550751388,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-btkpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d7b772-8f8e-4095-aaa6-fc9b1d68c681,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53bb00230c0915fb72c8594
f89b327ea93de60b21f74bb8bbea98be7af7d5c0,PodSandboxId:b1250bf09824f123677325475218e4cf4789bc966b6da72e7387e8d0c114dee5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758925859611809924,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2xzt8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e1bbf119-387c-430c-b64f-3412376a93d5,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:ad63674ee61b764de0b4292e936526c5fff997d5c918e38479959dd7ad66d185,PodSandboxId:c2124a5b8f4d4f16a1fab6ea805142d0dc208b4018e4b327923c7f8e15aaa501,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1758925859472001083,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-mk92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d7012b-de84-42ba-8ec1-3e1578c28cfd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a156c91664dbba69de3c62daeddadd17f3ea62e719400eb7575de0edc7b237,PodSandboxId:9afc50bd4655284b9f7792b29a82a64de5dedc47aca1be7f59ac0cdba9596cc2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758925851071855426,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c5fsh,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 1d4706ed-d612-42b6-8ce7-1c3b53174964,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f894b8af4f888d798535ef9cc5ee6b212a47d7bab6e08119879d11066730833f,PodSandboxId:dad20b6b94d34d4317a16783526042b0b80a7cb5529d70d23bedc6c1e4128319,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1758925846456282561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-5pmvk,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 07063f6d-dff7-4d19-a1cf-ac4db04a8027,},Annotations:
map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08d1b73931795de08ae3fe25c28a68cc48cf2a1f358986388b7e68cef1254a49,PodSandboxId:4a03161ad649c86ef5f6fababc00d5c61e2b112f0745952010807bb23df9c76b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758925842020813340,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20fd4fa-
1f62-423e-a836-f66893f73949,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ce52a782ec641bc3054d3b7fecfbf5015f0255a42d1d8b2817d0e21a3cb64f,PodSandboxId:164540b56841d45bbea8b25fd820262a02ff3dc521d2483ef4d9fa6bf455840f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758925807369906434,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device
-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-cdb8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b42dc693-f8dc-488e-a6df-11603c5146c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed,PodSandboxId:6f9b04761677876630b638de388847d0cd9b141a8301620dc9f0f8995da05593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758925807170331625,Labels:map[string]string{io.kubernetes.container.name: storage-provisio
ner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805513c7-5529-4f0e-bbe6-de0e474ba2ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed,PodSandboxId:4a821382e4a7e40f22aaab81e8bb96cf30745916ba0c162f9efbaed010997c81,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758925793811470387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
66bc5c9577-vcwdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3371fb-cab7-4a7e-8907-e11b45338ed0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29,PodSandboxId:e6bd3271dd6ac5f8ce745e3c6d5ed6c1c8b6e94486e2549e260561de7a8d9694,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674
df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758925792110209484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lldr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3500915-4e56-473c-8674-5ea502daaac6,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08,PodSandboxId:423d307a9a2ff59da5cb2aee768cb0e27b277a107aac0035e742bc3536de2a45,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5
d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758925780689458095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b3ab0a34880a8a828bd4ec7b048073,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce,PodSandboxId:f8b0370a64577d26d2005616cef004867bab0ed7612bdb68674b97c0cd4ddc44,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758925780691387127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72ebb9a6bc31069e8c997f3161744cee,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613d
e46,PodSandboxId:00739f8fdf1571de344a91ed170311f30ce26aae40b8fd9e24b9f24e7340f067,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758925780648843877,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7596254403ac958c412ddaf08adf07c0,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193,PodSandboxId:a5800cbdc6985f866308b5ec875d6185a6c0c7223e4b69157d6014fad076bb3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758925780660818663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-330674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd7d325e4c1d60f88ed2ac4cd01e5f4,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ba30215-8e47-4b80-a1d0-31b171b11f49 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c6b78ecb5174f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   b3f170d8fa06d       busybox
	e668eda665a7a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	b538a2e1c158d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a4ebcaf3e79e9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	041e5164edc96       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             3 minutes ago       Running             controller                               0                   8725b0863596a       ingress-nginx-controller-9cc49f96f-kbqsf
	051406a4cc7e9       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             3 minutes ago       Exited              patch                                    2                   1a394bb7ee033       ingress-nginx-admission-patch-vpbtt
	c4d7e9db5f9b6       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           4 minutes ago       Running             hostpath                                 0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a2c84352a2a8e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                4 minutes ago       Running             node-driver-registrar                    0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	a9c1d863f8e43       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   9dd7e0e52b989       csi-hostpath-attacher-0
	f0cd7128f9bd9       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago       Running             csi-resizer                              0                   d392af405e051       csi-hostpath-resizer-0
	71f6029288793       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   5cdd7c9d00703       snapshot-controller-7d9fbc56b8-n4kkw
	065e1b8cc9a38       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   a815f5a2dbf40       snapshot-controller-7d9fbc56b8-btkpl
	d53bb00230c09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              create                                   0                   b1250bf09824f       ingress-nginx-admission-create-2xzt8
	ad63674ee61b7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago       Running             csi-external-health-monitor-controller   0                   c2124a5b8f4d4       csi-hostpathplugin-mk92b
	79a156c91664d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            4 minutes ago       Running             gadget                                   0                   9afc50bd46552       gadget-c5fsh
	f894b8af4f888       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   dad20b6b94d34       local-path-provisioner-648f6765c9-5pmvk
	08d1b73931795       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   4a03161ad649c       kube-ingress-dns-minikube
	22ce52a782ec6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     5 minutes ago       Running             amd-gpu-device-plugin                    0                   164540b56841d       amd-gpu-device-plugin-cdb8s
	7dcddaa36c6f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             5 minutes ago       Running             storage-provisioner                      0                   6f9b047616778       storage-provisioner
	4d80adcca025a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             5 minutes ago       Running             coredns                                  0                   4a821382e4a7e       coredns-66bc5c9577-vcwdm
	91c093002446e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             5 minutes ago       Running             kube-proxy                               0                   e6bd3271dd6ac       kube-proxy-lldr6
	c14c61340bfb6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             5 minutes ago       Running             kube-controller-manager                  0                   f8b0370a64577       kube-controller-manager-addons-330674
	d546b62051d69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   423d307a9a2ff       etcd-addons-330674
	d71804cd6c0cd       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             5 minutes ago       Running             kube-scheduler                           0                   a5800cbdc6985       kube-scheduler-addons-330674
	96b63fa3232c4       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             5 minutes ago       Running             kube-apiserver                           0                   00739f8fdf157       kube-apiserver-addons-330674
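	Note on the listing above: no container for the failing default/nginx pod appears at all, which matches an image pull that never succeeded (ImagePullBackOff) rather than a container that started and crashed. A minimal sketch of reproducing such a listing by hand, assuming the dump comes from CRI-O's crictl inside the minikube VM (profile name taken from this run):

	    # list all containers (running and exited) on the node
	    minikube ssh -p addons-330674 -- sudo crictl ps -a
	    # and check the stuck pod's pull state from outside the VM
	    kubectl --context addons-330674 describe pod nginx -n default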
	
	
	==> coredns [4d80adcca025aaef75e6e06f57e8799486cfe77e98b93797c20bec0f4dab49ed] <==
	[INFO] 10.244.0.8:47574 - 21867 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000257654s
	[INFO] 10.244.0.8:47574 - 26774 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000212097s
	[INFO] 10.244.0.8:47574 - 28009 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001958391s
	[INFO] 10.244.0.8:47574 - 26885 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119263s
	[INFO] 10.244.0.8:47574 - 5147 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092914s
	[INFO] 10.244.0.8:47574 - 63848 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014382s
	[INFO] 10.244.0.8:47574 - 22153 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00016125s
	[INFO] 10.244.0.8:33854 - 30980 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169209s
	[INFO] 10.244.0.8:33854 - 31323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000369966s
	[INFO] 10.244.0.8:44393 - 54969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066964s
	[INFO] 10.244.0.8:44393 - 55232 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145932s
	[INFO] 10.244.0.8:38008 - 63546 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148543s
	[INFO] 10.244.0.8:38008 - 63995 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188374s
	[INFO] 10.244.0.8:57521 - 19791 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072445s
	[INFO] 10.244.0.8:57521 - 19991 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105577s
	[INFO] 10.244.0.23:33438 - 31331 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00059389s
	[INFO] 10.244.0.23:52290 - 40336 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000131355s
	[INFO] 10.244.0.23:36973 - 47600 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124178s
	[INFO] 10.244.0.23:58766 - 34961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000284537s
	[INFO] 10.244.0.23:51619 - 10278 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077755s
	[INFO] 10.244.0.23:56734 - 63793 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152417s
	[INFO] 10.244.0.23:44833 - 26370 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000890787s
	[INFO] 10.244.0.23:51260 - 4851 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001537806s
	[INFO] 10.244.0.26:37540 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260275s
	[INFO] 10.244.0.26:54969 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023223s
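	The NXDOMAIN/NOERROR pairs above are expected: each unqualified in-cluster name is expanded through the pod's DNS search list (namespace.svc.cluster.local, svc.cluster.local, cluster.local) and only the fully qualified form resolves. An illustrative resolver config that would produce exactly this pattern, assuming the default kubeadm/CoreDNS setup for a pod in the kube-system namespace (values not captured from this run):

	    # /etc/resolv.conf as typically injected into such a pod (illustrative)
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local
	    nameserver 10.96.0.10     # cluster DNS service IP, assumed default
	    options ndots:5           # names with fewer than 5 dots go through the search list first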
	
	
	==> describe nodes <==
	Name:               addons-330674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-330674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-330674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_29_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-330674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-330674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:29:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-330674
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:35:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:35:04 +0000   Fri, 26 Sep 2025 22:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    addons-330674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 0270d5ce774d47cc84b7b73291b9eb86
	  System UUID:                0270d5ce-774d-47cc-84b7-b73291b9eb86
	  Boot ID:                    261e85a6-9bd4-4867-9bbb-7559b9c83c19
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  gadget                      gadget-c5fsh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-kbqsf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m7s
	  kube-system                 amd-gpu-device-plugin-cdb8s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 coredns-66bc5c9577-vcwdm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m17s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpathplugin-mk92b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 etcd-addons-330674                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m23s
	  kube-system                 kube-apiserver-addons-330674                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-addons-330674       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-lldr6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-addons-330674                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-btkpl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 snapshot-controller-7d9fbc56b8-n4kkw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  local-path-storage          local-path-provisioner-648f6765c9-5pmvk     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m23s                  kubelet          Node addons-330674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s                  kubelet          Node addons-330674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s                  kubelet          Node addons-330674 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m22s                  kubelet          Node addons-330674 status is now: NodeReady
	  Normal  RegisteredNode           5m19s                  node-controller  Node addons-330674 event: Registered Node addons-330674 in Controller
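	The 850m / 42% CPU request figure above is simply the sum of the per-pod requests listed: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m out of 2000m allocatable (850/2000 ≈ 42%). The failing workloads (nginx, task-pv-pod, test-local-path) request nothing and are already scheduled, so the node has headroom and the failure does not look like a capacity problem.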
	
	
	==> dmesg <==
	[  +0.132133] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.398131] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.148040] kauditd_printk_skb: 243 callbacks suppressed
	[Sep26 22:30] kauditd_printk_skb: 245 callbacks suppressed
	[  +0.000005] kauditd_printk_skb: 357 callbacks suppressed
	[ +15.526203] kauditd_printk_skb: 172 callbacks suppressed
	[  +5.602328] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.205959] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.429608] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.063342] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.131781] kauditd_printk_skb: 20 callbacks suppressed
	[Sep26 22:31] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000062] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.652404] kauditd_printk_skb: 121 callbacks suppressed
	[  +3.064622] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.314802] kauditd_printk_skb: 89 callbacks suppressed
	[  +5.826353] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.328743] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.807828] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.004597] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.597860] kauditd_printk_skb: 38 callbacks suppressed
	[Sep26 22:32] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.934983] kauditd_printk_skb: 118 callbacks suppressed
	[  +0.000162] kauditd_printk_skb: 173 callbacks suppressed
	[ +19.393202] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d546b62051d6981b70d7a64cf0bb498a74b8a5f034aea3d6ca372b748273dd08] <==
	{"level":"info","ts":"2025-09-26T22:30:41.876982Z","caller":"traceutil/trace.go:172","msg":"trace[1158650298] transaction","detail":"{read_only:false; response_revision:991; number_of_response:1; }","duration":"217.468457ms","start":"2025-09-26T22:30:41.659503Z","end":"2025-09-26T22:30:41.876971Z","steps":["trace[1158650298] 'process raft request'  (duration: 217.358381ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:30:41.877725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.834403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:30:41.878678Z","caller":"traceutil/trace.go:172","msg":"trace[133223068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:991; }","duration":"206.79653ms","start":"2025-09-26T22:30:41.671867Z","end":"2025-09-26T22:30:41.878664Z","steps":["trace[133223068] 'agreement among raft nodes before linearized reading'  (duration: 205.301511ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:30:58.688471Z","caller":"traceutil/trace.go:172","msg":"trace[1166098963] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"115.285521ms","start":"2025-09-26T22:30:58.573171Z","end":"2025-09-26T22:30:58.688457Z","steps":["trace[1166098963] 'process raft request'  (duration: 115.158938ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:06.385950Z","caller":"traceutil/trace.go:172","msg":"trace[171528856] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"207.371807ms","start":"2025-09-26T22:31:06.178555Z","end":"2025-09-26T22:31:06.385927Z","steps":["trace[171528856] 'process raft request'  (duration: 207.21509ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:13.467583Z","caller":"traceutil/trace.go:172","msg":"trace[2032340984] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"148.79533ms","start":"2025-09-26T22:31:13.318772Z","end":"2025-09-26T22:31:13.467568Z","steps":["trace[2032340984] 'process raft request'  (duration: 148.072718ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637720Z","caller":"traceutil/trace.go:172","msg":"trace[1422923240] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1185; }","duration":"228.404518ms","start":"2025-09-26T22:31:14.409297Z","end":"2025-09-26T22:31:14.637701Z","steps":["trace[1422923240] 'read index received'  (duration: 228.396687ms)","trace[1422923240] 'applied index is now lower than readState.Index'  (duration: 6.717µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:14.637858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.541405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:14.637889Z","caller":"traceutil/trace.go:172","msg":"trace[1734423282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1152; }","duration":"228.589602ms","start":"2025-09-26T22:31:14.409293Z","end":"2025-09-26T22:31:14.637882Z","steps":["trace[1734423282] 'agreement among raft nodes before linearized reading'  (duration: 228.514609ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:14.637888Z","caller":"traceutil/trace.go:172","msg":"trace[1864404804] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"251.449676ms","start":"2025-09-26T22:31:14.386428Z","end":"2025-09-26T22:31:14.637877Z","steps":["trace[1864404804] 'process raft request'  (duration: 251.335525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:14.638161Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.799737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-26T22:31:14.638184Z","caller":"traceutil/trace.go:172","msg":"trace[586291321] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1153; }","duration":"167.828944ms","start":"2025-09-26T22:31:14.470349Z","end":"2025-09-26T22:31:14.638178Z","steps":["trace[586291321] 'agreement among raft nodes before linearized reading'  (duration: 167.686895ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.005233Z","caller":"traceutil/trace.go:172","msg":"trace[1859190441] linearizableReadLoop","detail":"{readStateIndex:1191; appliedIndex:1191; }","duration":"205.698958ms","start":"2025-09-26T22:31:15.799518Z","end":"2025-09-26T22:31:16.005217Z","steps":["trace[1859190441] 'read index received'  (duration: 205.694211ms)","trace[1859190441] 'applied index is now lower than readState.Index'  (duration: 3.689µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:16.005429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.897121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:16.005489Z","caller":"traceutil/trace.go:172","msg":"trace[1859758599] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1158; }","duration":"205.970508ms","start":"2025-09-26T22:31:15.799512Z","end":"2025-09-26T22:31:16.005483Z","steps":["trace[1859758599] 'agreement among raft nodes before linearized reading'  (duration: 205.868975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:16.005819Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.13092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.36\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-09-26T22:31:16.005907Z","caller":"traceutil/trace.go:172","msg":"trace[658261611] range","detail":"{range_begin:/registry/masterleases/192.168.39.36; range_end:; response_count:1; response_revision:1159; }","duration":"152.225874ms","start":"2025-09-26T22:31:15.853673Z","end":"2025-09-26T22:31:16.005899Z","steps":["trace[658261611] 'agreement among raft nodes before linearized reading'  (duration: 152.075231ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:16.006294Z","caller":"traceutil/trace.go:172","msg":"trace[630460783] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"208.955996ms","start":"2025-09-26T22:31:15.797328Z","end":"2025-09-26T22:31:16.006284Z","steps":["trace[630460783] 'process raft request'  (duration: 207.967404ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:20.645825Z","caller":"traceutil/trace.go:172","msg":"trace[1825086522] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"142.24064ms","start":"2025-09-26T22:31:20.503572Z","end":"2025-09-26T22:31:20.645813Z","steps":["trace[1825086522] 'process raft request'  (duration: 142.114273ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:29.646399Z","caller":"traceutil/trace.go:172","msg":"trace[1097200160] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"169.236279ms","start":"2025-09-26T22:31:29.477137Z","end":"2025-09-26T22:31:29.646373Z","steps":["trace[1097200160] 'process raft request'  (duration: 169.149315ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038214Z","caller":"traceutil/trace.go:172","msg":"trace[287194860] linearizableReadLoop","detail":"{readStateIndex:1476; appliedIndex:1476; }","duration":"165.591492ms","start":"2025-09-26T22:31:58.872592Z","end":"2025-09-26T22:31:59.038183Z","steps":["trace[287194860] 'read index received'  (duration: 165.586106ms)","trace[287194860] 'applied index is now lower than readState.Index'  (duration: 4.553µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:31:59.038434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.843902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:59.038513Z","caller":"traceutil/trace.go:172","msg":"trace[1185068248] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1431; }","duration":"165.936326ms","start":"2025-09-26T22:31:58.872567Z","end":"2025-09-26T22:31:59.038503Z","steps":["trace[1185068248] 'agreement among raft nodes before linearized reading'  (duration: 165.795637ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:59.038517Z","caller":"traceutil/trace.go:172","msg":"trace[270500941] transaction","detail":"{read_only:false; response_revision:1432; number_of_response:1; }","duration":"228.1352ms","start":"2025-09-26T22:31:58.810371Z","end":"2025-09-26T22:31:59.038506Z","steps":["trace[270500941] 'process raft request'  (duration: 227.991076ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:32:29.353426Z","caller":"traceutil/trace.go:172","msg":"trace[1234175230] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"102.357293ms","start":"2025-09-26T22:32:29.251056Z","end":"2025-09-26T22:32:29.353413Z","steps":["trace[1234175230] 'process raft request'  (duration: 102.235629ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:35:09 up 5 min,  0 users,  load average: 0.18, 0.87, 0.53
	Linux addons-330674 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [96b63fa3232c4e36cd45a617624415a34216ab78bd0288ce20498e29c613de46] <==
	W0926 22:30:20.985331       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0926 22:30:21.003927       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0926 22:30:43.375116       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 22:30:46.463863       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 22:30:46.464001       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0926 22:30:46.466655       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	E0926 22:30:46.470932       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	E0926 22:30:46.472988       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.239.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.239.144:443: connect: connection refused" logger="UnhandledError"
	I0926 22:30:46.609462       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0926 22:31:07.128656       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0926 22:31:41.108423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:44004: use of closed network connection
	E0926 22:31:41.314606       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:44040: use of closed network connection
	I0926 22:31:45.954847       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:31:50.616495       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.104.222"}
	I0926 22:32:06.691946       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0926 22:32:06.935381       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.87.170"}
	I0926 22:32:31.077795       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:32:47.481193       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0926 22:33:04.609647       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:33:34.675154       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:33.914198       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:37.466185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
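	The repeated v1beta1.metrics.k8s.io errors above show the aggregated metrics API being registered before its backing service at 10.102.239.144:443 is reachable; by 22:32:47 the item is removed from the aggregation queue, which points at transient start-up noise rather than a cause of the Ingress failure. A sketch of follow-up checks one might run against this cluster, assuming the metrics-server addon carries its usual k8s-app=metrics-server label (not confirmed in this log):

	    kubectl --context addons-330674 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context addons-330674 -n kube-system get pods -l k8s-app=metrics-server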
	
	
	==> kube-controller-manager [c14c61340bfb60319237ab9cdb7743d04777d104299829a2666627dc25b549ce] <==
	I0926 22:29:50.964288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:29:50.964435       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0926 22:29:50.964792       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:29:50.965035       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:29:50.965437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:29:50.965524       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0926 22:29:50.965748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:29:50.966394       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:29:50.966608       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:29:50.966942       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0926 22:29:50.972568       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:29:50.975319       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:29:50.987415       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	E0926 22:29:59.198407       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0926 22:30:20.932946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 22:30:20.933180       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0926 22:30:20.933255       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0926 22:30:20.964612       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0926 22:30:20.972932       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0926 22:30:21.033427       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:30:21.073363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:31:50.680482       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-85f8f8dc54\" failed with pods \"headlamp-85f8f8dc54-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0926 22:31:54.682707       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0926 22:32:05.933470       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0926 22:32:17.836527       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [91c093002446e01a4b5ed0e5bf25dd5e04c44bbdf58a99648d2615cbc9a8df29] <==
	I0926 22:29:52.750738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:29:52.855140       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:29:52.855184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.36"]
	E0926 22:29:52.855251       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:29:53.034433       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:29:53.034497       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:29:53.034529       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:29:53.056167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:29:53.056873       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:29:53.056887       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:29:53.081717       1 config.go:309] "Starting node config controller"
	I0926 22:29:53.081753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:29:53.081761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:29:53.082169       1 config.go:200] "Starting service config controller"
	I0926 22:29:53.082179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:29:53.082197       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:29:53.082201       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:29:53.082211       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:29:53.082215       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:29:53.183212       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:29:53.183245       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:29:53.183259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
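	The "No iptables support for family IPv6" warning above records that the VM kernel has no ip6tables nat table, so kube-proxy falls back to single-stack IPv4 iptables mode; IPv4 service proxying for this test is unaffected. A minimal sketch for confirming the same condition by hand inside the VM (commands assumed; an exit status of 3 would match the log):

	    minikube ssh -p addons-330674 -- sudo ip6tables -t nat -L POSTROUTING   # expected to fail: table does not exist
	    minikube ssh -p addons-330674 -- sudo iptables -t nat -L POSTROUTING    # IPv4 nat table is present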
	
	
	==> kube-scheduler [d71804cd6c0cd12a68a0fcc99788afd0951532dc500dcac6297763fb881c5193] <==
	E0926 22:29:43.950921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:43.950997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:43.952216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:29:43.952553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:29:43.952622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:43.953940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:43.954122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:43.954127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:29:43.955446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:43.955726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:43.955808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:29:43.956032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:43.956048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:29:44.761681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:29:44.783680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:29:44.813163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:29:44.863573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:29:44.938817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:29:44.949980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:29:45.133806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:29:45.176477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:29:45.243697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:29:45.335227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:29:45.431436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0926 22:29:48.238209       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:34:07 addons-330674 kubelet[1505]: E0926 22:34:07.029432    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926047028000054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:07 addons-330674 kubelet[1505]: E0926 22:34:07.029550    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926047028000054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:12 addons-330674 kubelet[1505]: I0926 22:34:12.804586    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:34:17 addons-330674 kubelet[1505]: E0926 22:34:17.032294    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926057031739327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:17 addons-330674 kubelet[1505]: E0926 22:34:17.032351    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926057031739327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:22 addons-330674 kubelet[1505]: E0926 22:34:22.333325    1505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:34:22 addons-330674 kubelet[1505]: E0926 22:34:22.333385    1505 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 26 22:34:22 addons-330674 kubelet[1505]: E0926 22:34:22.333596    1505 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(cf3126e1-0cb8-4c12-8028-997b82450384): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:34:22 addons-330674 kubelet[1505]: E0926 22:34:22.333629    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	Sep 26 22:34:27 addons-330674 kubelet[1505]: E0926 22:34:27.036380    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926067034797226  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:27 addons-330674 kubelet[1505]: E0926 22:34:27.036426    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926067034797226  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:34 addons-330674 kubelet[1505]: E0926 22:34:34.807579    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cf3126e1-0cb8-4c12-8028-997b82450384"
	Sep 26 22:34:37 addons-330674 kubelet[1505]: E0926 22:34:37.040404    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926077039380910  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:37 addons-330674 kubelet[1505]: E0926 22:34:37.040492    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926077039380910  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:47 addons-330674 kubelet[1505]: E0926 22:34:47.045125    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926087043797383  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:47 addons-330674 kubelet[1505]: E0926 22:34:47.046129    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926087043797383  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:52 addons-330674 kubelet[1505]: E0926 22:34:52.425726    1505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 26 22:34:52 addons-330674 kubelet[1505]: E0926 22:34:52.426201    1505 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 26 22:34:52 addons-330674 kubelet[1505]: E0926 22:34:52.426520    1505 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(8d821a63-845c-4938-9b63-a3f7ca3a23d9): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:34:52 addons-330674 kubelet[1505]: E0926 22:34:52.426575    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8d821a63-845c-4938-9b63-a3f7ca3a23d9"
	Sep 26 22:34:57 addons-330674 kubelet[1505]: E0926 22:34:57.050886    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926097049618908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:34:57 addons-330674 kubelet[1505]: E0926 22:34:57.050944    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926097049618908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:35:07 addons-330674 kubelet[1505]: E0926 22:35:07.054558    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926107053551413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:35:07 addons-330674 kubelet[1505]: E0926 22:35:07.054586    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926107053551413  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:519332}  inodes_used:{value:186}}"
	Sep 26 22:35:07 addons-330674 kubelet[1505]: E0926 22:35:07.807669    1505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8d821a63-845c-4938-9b63-a3f7ca3a23d9"
	
	
	==> storage-provisioner [7dcddaa36c6f8e064b9e65b380137f789e7379644bdf02c4ce91a8481abe8aed] <==
	W0926 22:34:44.124153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:46.130338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:46.140055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:48.144621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:48.152924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:50.156956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:50.162576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:52.167207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:52.173198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:54.177534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:54.183448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:56.189132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:56.200285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:58.205251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:58.213879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:00.218335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:00.224649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:02.229620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:02.235448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:04.239632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:04.248143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:06.251672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:06.257964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:08.263287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:08.270352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
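The kubelet errors captured above share one root cause: every pull from docker.io in this run (nginx:alpine, nginx, busybox:stable) is rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit, so the pods never leave ImagePullBackOff within the test's 8m0s wait. As a minimal illustrative sketch (a hypothetical helper, not minikube or kubelet code), the Go snippet below shows how that failure can be told apart from other ErrImagePull causes by its error text:

package main

import (
	"fmt"
	"strings"
)

// isDockerHubRateLimit reports whether an image-pull error message matches the
// registry's "toomanyrequests" response (HTTP 429) returned for anonymous pulls.
func isDockerHubRateLimit(errMsg string) bool {
	msg := strings.ToLower(errMsg)
	return strings.Contains(msg, "toomanyrequests") ||
		strings.Contains(msg, "unauthenticated pull rate limit")
}

func main() {
	// Error text copied (shortened) from the kubelet log for default/nginx above.
	pullErr := "reading manifest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit."
	if isDockerHubRateLimit(pullErr) {
		fmt.Println("pull failed due to Docker Hub rate limiting; retry later, authenticate, or use a registry mirror")
	}
}

The same toomanyrequests string appears across unrelated parallel tests above, which points at the shared registry limit rather than at any one addon.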
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-330674 -n addons-330674
helpers_test.go:269: (dbg) Run:  kubectl --context addons-330674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1 (110.692634ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:06 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvdz7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvdz7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/nginx to addons-330674
	  Warning  Failed     48s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     48s (x2 over 2m33s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    36s (x2 over 2m32s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     36s (x2 over 2m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x3 over 3m3s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:18 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pzlv4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-pzlv4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m52s                default-scheduler  Successfully assigned default/task-pv-pod to addons-330674
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    91s                  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     91s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    77s (x2 over 2m51s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-330674/192.168.39.36
	Start Time:       Fri, 26 Sep 2025 22:32:07 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbhvc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-gbhvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/test-local-path to addons-330674
	  Normal   Pulling    109s (x2 over 3m2s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     18s (x2 over 2m2s)   kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     18s (x2 over 2m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x2 over 2m2s)    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     3s (x2 over 2m2s)    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2xzt8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vpbtt" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-330674 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-2xzt8 ingress-nginx-admission-patch-vpbtt: exit status 1
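The Events tables above show the standard cadence for a failed pull: Pulling, then Failed/ErrImagePull, then BackOff/ImagePullBackOff with progressively longer waits between attempts. Purely as an illustration of that capped exponential back-off (assumed parameters, not the kubelet's actual values or code), a sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second   // assumed initial back-off
	maxDelay := 5 * time.Minute // assumed ceiling
	for attempt := 1; attempt <= 6; attempt++ {
		// Each failed pull roughly doubles the wait, up to the ceiling,
		// which is why a pod sits in ImagePullBackOff between attempts.
		fmt.Printf("attempt %d failed; backing off %s before the next pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}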
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.033203304s)
--- FAIL: TestAddons/parallel/LocalPath (232.23s)

TestFunctional/parallel/DashboardCmd (302.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd


=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-615476 --alsologtostderr -v=1]
E0926 22:49:16.831853    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:51:32.968984    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:52:00.674121    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-615476 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-615476 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-615476 --alsologtostderr -v=1] stderr:
I0926 22:49:13.396060   19620 out.go:360] Setting OutFile to fd 1 ...
I0926 22:49:13.396217   19620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:49:13.396226   19620 out.go:374] Setting ErrFile to fd 2...
I0926 22:49:13.396230   19620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:49:13.396415   19620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:49:13.396667   19620 mustload.go:65] Loading cluster: functional-615476
I0926 22:49:13.397024   19620 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:49:13.397377   19620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:49:13.397429   19620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:49:13.411155   19620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46185
I0926 22:49:13.411604   19620 main.go:141] libmachine: () Calling .GetVersion
I0926 22:49:13.412247   19620 main.go:141] libmachine: Using API Version  1
I0926 22:49:13.412272   19620 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:49:13.412684   19620 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:49:13.412904   19620 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:49:13.414467   19620 host.go:66] Checking if "functional-615476" exists ...
I0926 22:49:13.414762   19620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:49:13.414798   19620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:49:13.428537   19620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
I0926 22:49:13.428993   19620 main.go:141] libmachine: () Calling .GetVersion
I0926 22:49:13.429472   19620 main.go:141] libmachine: Using API Version  1
I0926 22:49:13.429494   19620 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:49:13.429860   19620 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:49:13.430031   19620 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:49:13.430162   19620 api_server.go:166] Checking apiserver status ...
I0926 22:49:13.430211   19620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0926 22:49:13.430237   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:49:13.433529   19620 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:49:13.433944   19620 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:49:13.433987   19620 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:49:13.434152   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:49:13.434327   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:49:13.434517   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:49:13.434670   19620 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:49:13.538402   19620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6163/cgroup
W0926 22:49:13.552595   19620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6163/cgroup: Process exited with status 1
stdout:

stderr:
I0926 22:49:13.552684   19620 ssh_runner.go:195] Run: ls
I0926 22:49:13.558093   19620 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8441/healthz ...
I0926 22:49:13.568107   19620 api_server.go:279] https://192.168.39.253:8441/healthz returned 200:
ok
W0926 22:49:13.568153   19620 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0926 22:49:13.568299   19620 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:49:13.568314   19620 addons.go:69] Setting dashboard=true in profile "functional-615476"
I0926 22:49:13.568323   19620 addons.go:238] Setting addon dashboard=true in "functional-615476"
I0926 22:49:13.568346   19620 host.go:66] Checking if "functional-615476" exists ...
I0926 22:49:13.568593   19620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:49:13.568636   19620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:49:13.582424   19620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
I0926 22:49:13.582910   19620 main.go:141] libmachine: () Calling .GetVersion
I0926 22:49:13.583391   19620 main.go:141] libmachine: Using API Version  1
I0926 22:49:13.583438   19620 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:49:13.583854   19620 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:49:13.584294   19620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:49:13.584329   19620 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:49:13.597533   19620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
I0926 22:49:13.598011   19620 main.go:141] libmachine: () Calling .GetVersion
I0926 22:49:13.598528   19620 main.go:141] libmachine: Using API Version  1
I0926 22:49:13.598550   19620 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:49:13.598920   19620 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:49:13.599138   19620 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:49:13.601085   19620 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:49:13.603564   19620 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0926 22:49:13.604890   19620 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0926 22:49:13.606217   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0926 22:49:13.606239   19620 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0926 22:49:13.606262   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:49:13.609445   19620 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:49:13.610043   19620 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:49:13.610075   19620 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:49:13.610278   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:49:13.610494   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:49:13.610654   19620 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:49:13.610798   19620 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:49:13.719126   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0926 22:49:13.719156   19620 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0926 22:49:13.743248   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0926 22:49:13.743277   19620 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0926 22:49:13.773415   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0926 22:49:13.773446   19620 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0926 22:49:13.797995   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0926 22:49:13.798022   19620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0926 22:49:13.835432   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0926 22:49:13.835465   19620 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0926 22:49:13.859596   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0926 22:49:13.859642   19620 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0926 22:49:13.884386   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0926 22:49:13.884415   19620 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0926 22:49:13.908091   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0926 22:49:13.908120   19620 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0926 22:49:13.934859   19620 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:49:13.934885   19620 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0926 22:49:13.959279   19620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:49:14.794783   19620 main.go:141] libmachine: Making call to close driver server
I0926 22:49:14.794816   19620 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:49:14.795131   19620 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:49:14.795148   19620 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:49:14.795156   19620 main.go:141] libmachine: Making call to close driver server
I0926 22:49:14.795164   19620 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:49:14.795404   19620 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:49:14.795422   19620 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:49:14.797119   19620 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-615476 addons enable metrics-server

I0926 22:49:14.798432   19620 addons.go:201] Writing out "functional-615476" config to set dashboard=true...
W0926 22:49:14.798650   19620 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0926 22:49:14.799317   19620 kapi.go:59] client config for functional-615476: &rest.Config{Host:"https://192.168.39.253:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0926 22:49:14.799741   19620 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0926 22:49:14.799756   19620 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0926 22:49:14.799760   19620 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0926 22:49:14.799764   19620 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0926 22:49:14.799769   19620 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0926 22:49:14.814370   19620 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  19be3933-4262-4d18-8f98-d79b308669d9 954 0 2025-09-26 22:49:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-26 22:49:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.252.28,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.252.28],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0926 22:49:14.814554   19620 out.go:285] * Launching proxy ...
* Launching proxy ...
I0926 22:49:14.814632   19620 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-615476 proxy --port 36195]
I0926 22:49:14.814938   19620 dashboard.go:157] Waiting for kubectl to output host:port ...
I0926 22:49:14.857820   19620 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0926 22:49:14.857875   19620 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0926 22:49:14.867588   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6115bb30-4c09-4039-b72e-b3df48b9ed5e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc000864b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316a00 TLS:<nil>}
I0926 22:49:14.867660   19620 retry.go:31] will retry after 94.174µs: Temporary Error: unexpected response code: 503
I0926 22:49:14.871991   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0636ed17-01a2-455d-828a-9daaf31d9917] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0013e4140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I0926 22:49:14.872046   19620 retry.go:31] will retry after 117.669µs: Temporary Error: unexpected response code: 503
I0926 22:49:14.880489   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f412c496-5ef5-4392-a3f5-dac65f14fd01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc000864d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317040 TLS:<nil>}
I0926 22:49:14.880557   19620 retry.go:31] will retry after 156.215µs: Temporary Error: unexpected response code: 503
I0926 22:49:14.887307   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d1f28592-5f3a-4320-8a12-2ec627f3b803] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0013e4200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I0926 22:49:14.887377   19620 retry.go:31] will retry after 386.288µs: Temporary Error: unexpected response code: 503
I0926 22:49:14.893744   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[99ef4b5b-d62c-4630-93d2-a787cc9bb95e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0015b8040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317180 TLS:<nil>}
I0926 22:49:14.893798   19620 retry.go:31] will retry after 752.659µs: Temporary Error: unexpected response code: 503
I0926 22:49:14.899306   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[768c7b13-a7f2-4fcc-a9da-3a46d74f3d24] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0013e4300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b040 TLS:<nil>}
I0926 22:49:14.899360   19620 retry.go:31] will retry after 1.088121ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.903172   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0c3b2a78-9304-4d43-aa2b-f280b8ae4d0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0008656c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003172c0 TLS:<nil>}
I0926 22:49:14.903234   19620 retry.go:31] will retry after 1.144097ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.909323   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fec5af56-15a8-4445-a2c6-20438359ae9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0013e4400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I0926 22:49:14.909396   19620 retry.go:31] will retry after 2.224248ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.915950   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dfd0f8f3-4ab8-402c-b7e2-d36cd6edf7c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0008657c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317400 TLS:<nil>}
I0926 22:49:14.916009   19620 retry.go:31] will retry after 2.98753ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.921796   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a1942b81-509e-4396-b1e3-85f7a575f208] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0015b8100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfa40 TLS:<nil>}
I0926 22:49:14.921876   19620 retry.go:31] will retry after 5.282162ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.931621   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fcb0a97-6809-4ab1-8fd0-ae656970461b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0013e4500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b2c0 TLS:<nil>}
I0926 22:49:14.931670   19620 retry.go:31] will retry after 2.883418ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.943794   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c331fc61-135d-49b7-af9b-82046d36b170] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc0015b8200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317540 TLS:<nil>}
I0926 22:49:14.943863   19620 retry.go:31] will retry after 10.259007ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.962400   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[22063f61-b793-4efd-9158-b74720d5a10d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc000865a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b680 TLS:<nil>}
I0926 22:49:14.962481   19620 retry.go:31] will retry after 15.473575ms: Temporary Error: unexpected response code: 503
I0926 22:49:14.984268   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c16bf42-f209-4282-84b9-8ba78fcc79f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:14 GMT]] Body:0xc000865bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I0926 22:49:14.984327   19620 retry.go:31] will retry after 27.724164ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.016179   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b6ca644d-952e-462c-85b5-5341a132d2a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0013e4600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfcc0 TLS:<nil>}
I0926 22:49:15.016255   19620 retry.go:31] will retry after 29.78707ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.052296   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a871b77-49d1-45df-abed-2cd95799f4e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc000865d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317680 TLS:<nil>}
I0926 22:49:15.052368   19620 retry.go:31] will retry after 39.064323ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.098268   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[309b326d-4d84-4fff-b6d0-55326b9e3eda] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0013e4700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I0926 22:49:15.098329   19620 retry.go:31] will retry after 58.737157ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.162337   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae498612-ebbd-4e6d-a325-c1afd76971c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0015b8340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003177c0 TLS:<nil>}
I0926 22:49:15.162392   19620 retry.go:31] will retry after 69.444591ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.237176   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b000b560-e34d-4e81-b73d-6c01479fa363] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc000865e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b7c0 TLS:<nil>}
I0926 22:49:15.237238   19620 retry.go:31] will retry after 175.568689ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.417657   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e5909e7d-5f55-4266-9533-b8a5ae7abd08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0015b8440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c000 TLS:<nil>}
I0926 22:49:15.417718   19620 retry.go:31] will retry after 119.92173ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.541367   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15ad3759-490a-4653-a213-f746eae02ec8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0013e4800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b900 TLS:<nil>}
I0926 22:49:15.541439   19620 retry.go:31] will retry after 432.451202ms: Temporary Error: unexpected response code: 503
I0926 22:49:15.976982   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46b179a0-6dcf-47c4-9253-d291a6b11dee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:15 GMT]] Body:0xc0013e48c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317900 TLS:<nil>}
I0926 22:49:15.977046   19620 retry.go:31] will retry after 677.009315ms: Temporary Error: unexpected response code: 503
I0926 22:49:16.658197   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ef4c4bb-d71c-4b51-b52f-a26e55cba8ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:16 GMT]] Body:0xc00165e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317cc0 TLS:<nil>}
I0926 22:49:16.658264   19620 retry.go:31] will retry after 839.505472ms: Temporary Error: unexpected response code: 503
I0926 22:49:17.501215   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7e8b9a8-3a1f-4d61-8370-62927ba9663e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:17 GMT]] Body:0xc0015b8540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c140 TLS:<nil>}
I0926 22:49:17.501281   19620 retry.go:31] will retry after 1.235448806s: Temporary Error: unexpected response code: 503
I0926 22:49:18.741353   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06a2857a-2a13-48f2-a738-6fba714c97e2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:18 GMT]] Body:0xc00165e140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696000 TLS:<nil>}
I0926 22:49:18.741423   19620 retry.go:31] will retry after 1.259735s: Temporary Error: unexpected response code: 503
I0926 22:49:20.004981   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8edac4bc-85d3-4c06-a925-df3944cd775c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:19 GMT]] Body:0xc0015b85c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c280 TLS:<nil>}
I0926 22:49:20.005043   19620 retry.go:31] will retry after 2.154355861s: Temporary Error: unexpected response code: 503
I0926 22:49:22.165591   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ad3117e-eece-4f64-a1a1-6a55585fb810] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:22 GMT]] Body:0xc00165e240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031ba40 TLS:<nil>}
I0926 22:49:22.165667   19620 retry.go:31] will retry after 2.645719233s: Temporary Error: unexpected response code: 503
I0926 22:49:24.815796   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3395cfee-e404-4709-8ba2-1fddffba3d0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:24 GMT]] Body:0xc0015b8740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031bb80 TLS:<nil>}
I0926 22:49:24.815864   19620 retry.go:31] will retry after 4.556652208s: Temporary Error: unexpected response code: 503
I0926 22:49:29.376508   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d0f4d97-4e0a-47eb-b935-4409fd9b16cb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:29 GMT]] Body:0xc0015b8840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031bcc0 TLS:<nil>}
I0926 22:49:29.376567   19620 retry.go:31] will retry after 9.621446194s: Temporary Error: unexpected response code: 503
I0926 22:49:39.001946   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b5c71ca-2045-41fb-ac87-ba0ed1035775] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:38 GMT]] Body:0xc0015b88c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696140 TLS:<nil>}
I0926 22:49:39.002004   19620 retry.go:31] will retry after 9.535594664s: Temporary Error: unexpected response code: 503
I0926 22:49:48.541518   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f8631d5-a493-4466-a0db-0ef0e93c6fdb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:49:48 GMT]] Body:0xc0015b8980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031be00 TLS:<nil>}
I0926 22:49:48.541600   19620 retry.go:31] will retry after 27.759927888s: Temporary Error: unexpected response code: 503
I0926 22:50:16.305865   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[39df0507-8782-4aa0-81e8-11df9331af1f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:50:16 GMT]] Body:0xc00165e300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696280 TLS:<nil>}
I0926 22:50:16.305923   19620 retry.go:31] will retry after 38.228911511s: Temporary Error: unexpected response code: 503
I0926 22:50:54.538956   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed179465-c041-4284-9be0-bd72335310df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:50:54 GMT]] Body:0xc00165e3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015f2000 TLS:<nil>}
I0926 22:50:54.539024   19620 retry.go:31] will retry after 40.060229067s: Temporary Error: unexpected response code: 503
I0926 22:51:34.603653   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe661b61-8d15-40db-9dc5-6ac08f4b760e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:51:34 GMT]] Body:0xc000a90080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c3c0 TLS:<nil>}
I0926 22:51:34.603727   19620 retry.go:31] will retry after 43.250978087s: Temporary Error: unexpected response code: 503
I0926 22:52:17.860577   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3969f835-74e0-47b4-b741-270a0dd24faa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:52:17 GMT]] Body:0xc000a90180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164c500 TLS:<nil>}
I0926 22:52:17.860641   19620 retry.go:31] will retry after 50.382833141s: Temporary Error: unexpected response code: 503
I0926 22:53:08.251457   19620 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[795fc605-d04d-492a-ba5c-42f9a7c40b90] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:53:08 GMT]] Body:0xc000a904c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000522000 TLS:<nil>}
I0926 22:53:08.251544   19620 retry.go:31] will retry after 1m23.500501096s: Temporary Error: unexpected response code: 503
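Reading note (editorial, not part of the captured log): the lengthening intervals above (432ms, 677ms, ... up to 1m23.5s) are the signature of an exponential-backoff poll against the dashboard proxy, which keeps answering 503 until the test gives up. The Go sketch below is purely illustrative and is not minikube's retry.go; the URL, starting delay, cap, and jitter are assumptions chosen only to mirror the shape of the log.

// Illustrative sketch: poll a URL with roughly exponential backoff until it
// stops returning 503 or an overall deadline is hit.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func waitFor200(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 400 * time.Millisecond // assumed starting interval

	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // service is finally serving
			}
			fmt.Printf("Temporary Error: unexpected response code: %d\n", resp.StatusCode)
		} else {
			fmt.Printf("request failed: %v\n", err)
		}

		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s after %s", url, maxWait)
		}

		// Grow the wait with some jitter, then double the base delay,
		// mirroring the lengthening "will retry after ..." lines above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	// Hypothetical local proxy URL; the report polls a kubectl proxy on 127.0.0.1.
	_ = waitFor200("http://127.0.0.1:36195/", 5*time.Minute)
}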
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-615476 -n functional-615476
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 logs -n 25: (1.577477244s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image save kicbase/echo-server:functional-615476 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image rm kicbase/echo-server:functional-615476 --alsologtostderr                                                                           │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image save --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ start          │ -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-615476 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format short --alsologtostderr                                                                                                  │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format yaml --alsologtostderr                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-615476 ssh pgrep buildkitd                                                                                                                        │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ image          │ functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr                                                       │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format json --alsologtostderr                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format table --alsologtostderr                                                                                                  │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:53:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:53:09.442287   21082 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:09.442378   21082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.442383   21082 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:09.442390   21082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.442587   21082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:53:09.443043   21082 out.go:368] Setting JSON to false
	I0926 22:53:09.443913   21082 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1758925055,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:09.444002   21082 start.go:140] virtualization: kvm guest
	I0926 22:53:09.445752   21082 out.go:179] * [functional-615476] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:09.447205   21082 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:09.447209   21082 notify.go:220] Checking for updates...
	I0926 22:53:09.449890   21082 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:09.451124   21082 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:53:09.452259   21082 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:53:09.453425   21082 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:09.454636   21082 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:09.456284   21082 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:53:09.456645   21082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.456717   21082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.473316   21082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0926 22:53:09.473822   21082 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.474359   21082 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.474381   21082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.474731   21082 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.474959   21082 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.475258   21082 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:09.475653   21082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.475691   21082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.489228   21082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0926 22:53:09.489717   21082 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.490212   21082 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.490243   21082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.490680   21082 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.490902   21082 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.521347   21082 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 22:53:09.522767   21082 start.go:304] selected driver: kvm2
	I0926 22:53:09.522786   21082 start.go:924] validating driver "kvm2" against &{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString
: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.522920   21082 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:09.523766   21082 cni.go:84] Creating CNI manager for ""
	I0926 22:53:09.523812   21082 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:53:09.523886   21082 start.go:348] cluster config:
	{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.525375   21082 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.257530364Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:34a1533455f86164a58611e10f8817018e3fa2713cec38270b3bc2e493d1e4bd,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-6c4r4,Uid:6619b3e4-f651-4584-b39b-f475dd22d0a4,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926954991194498,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-6c4r4,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6619b3e4-f651-4584-b39b-f475dd22d0a4,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:49:14.640131632Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0036a156076277230c46617fc928bfd2722ab30a63e15453d84ac5ce37e92056,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-77bf4d6c4c-qrvbk,Uid:032d7cb7-7589-4435-9d70-2e690753035c,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926954932415552,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-qrvbk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 032d7cb7-7589-4435-9d70-2e690753035c,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:49:14.611734226Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:857b4229-9648-4b45-804e-37c86a2a4dc0,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1758926835704952169,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,i
o.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:47:15.378724813Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99ebfaf26c4958796d571f0ded8565769f22c5de050de7a22fc4b96a66ed4745,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:98ebeeb7-6702-49e0-ac46-af33f5ceabfe,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926822846150388,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 98ebeeb7-6702-49e0-ac46-af33f5ceabfe,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docke
r.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:46:59.456615580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1387d6593f4482824f141af7d65ad1b5e9f4b25a4298052826bd8762f9cc3fae,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-wvdjw,Uid:308a2350-8572-448a-aaa7-72edfa592090,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926813022773644,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-wvdjw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 308a2350-8572-448a-aaa7-72edfa592090,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:46:52.696730629Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce5e5801792e49ce437b41054b22f0a887fc0bd
750ae3dad6883bf57be672e5b,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-vspp8,Uid:546709eb-f190-4013-8e4d-8441a5701947,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926812901691981,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-vspp8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 546709eb-f190-4013-8e4d-8441a5701947,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:46:52.546205155Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-9dftf,Uid:e6766175-9c7c-4531-8f20-a12f26e25a36,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926811392301457,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.n
amespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:46:51.065700945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-v7vd6,Uid:fee94ace-f9a5-4681-a86a-01d8b513d998,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926786865688212,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:46:26.319377099Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&PodSandboxMetadata{
Name:storage-provisioner,Uid:c670ee02-4ecb-4f17-b779-1a64005c4259,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926786672606267,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"host
Network\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:46:26.319376028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&PodSandboxMetadata{Name:kube-proxy-k6bl8,Uid:37d8ee67-d205-47e3-8b92-0c9f65478a89,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926786666345651,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:46:26.319373212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131
593d3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-615476,Uid:ea4d4941d03a88b7a16ab5be7b589633,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926781943684036,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea4d4941d03a88b7a16ab5be7b589633,kubernetes.io/config.seen: 2025-09-26T22:46:21.323619370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&PodSandboxMetadata{Name:etcd-functional-615476,Uid:dae760326ef99aa8663cb2343716dfa8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926781865136975,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-funct
ional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.253:2379,kubernetes.io/config.hash: dae760326ef99aa8663cb2343716dfa8,kubernetes.io/config.seen: 2025-09-26T22:46:21.323620440Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-615476,Uid:ee542dbc7a21c027f2bed47e1ae4a1cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758926781858696723,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-addres
s.endpoint: 192.168.39.253:8441,kubernetes.io/config.hash: ee542dbc7a21c027f2bed47e1ae4a1cc,kubernetes.io/config.seen: 2025-09-26T22:46:21.323621555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-615476,Uid:ae66e2d87889a042120fb5a5d085e38f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1758926781850364210,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ae66e2d87889a042120fb5a5d085e38f,kubernetes.io/config.seen: 2025-09-26T22:46:21.323615562Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702ce
f62c2d368a95b91cc,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-v7vd6,Uid:fee94ace-f9a5-4681-a86a-01d8b513d998,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926723438597794,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T22:44:35.863279230Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-615476,Uid:ae66e2d87889a042120fb5a5d085e38f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926723156610590,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manag
er-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ae66e2d87889a042120fb5a5d085e38f,kubernetes.io/config.seen: 2025-09-26T22:44:30.443321472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-615476,Uid:ea4d4941d03a88b7a16ab5be7b589633,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926723144151403,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea4d4941d03a88b7a16ab5be7b589633,kubernetes.io/config.seen: 2025-09-26T22:44:30.443387188Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c670ee02-4ecb-4f17-b779-1a64005c4259,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926723088543579,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/
storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-26T22:44:37.841955684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&PodSandboxMetadata{Name:kube-proxy-k6bl8,Uid:37d8ee67-d205-47e3-8b92-0c9f65478a89,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926723087887723,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/conf
ig.seen: 2025-09-26T22:44:35.763827786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&PodSandboxMetadata{Name:etcd-functional-615476,Uid:dae760326ef99aa8663cb2343716dfa8,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1758926722987258042,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.253:2379,kubernetes.io/config.hash: dae760326ef99aa8663cb2343716dfa8,kubernetes.io/config.seen: 2025-09-26T22:44:30.443389423Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=43571159-4227-4c51-b066-ceb44eaaa72c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.259162064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6babe279-e70e-4872-965a-a38ab2d1d844 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.259700411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6babe279-e70e-4872-965a-a38ab2d1d844 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.260388726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6babe279-e70e-4872-965a-a38ab2d1d844 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.283093969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=688d62c6-250e-42ae-a25a-99b8b687d440 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.283197029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=688d62c6-250e-42ae-a25a-99b8b687d440 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.285406147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61f14562-7755-4347-a835-1073b5d0b9aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.286633212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927254286607603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61f14562-7755-4347-a835-1073b5d0b9aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.287301888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e30969c8-a5c2-4114-9584-cfb482c829be name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.287405594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e30969c8-a5c2-4114-9584-cfb482c829be name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.287723205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e30969c8-a5c2-4114-9584-cfb482c829be name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.327765595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acba5780-ab5a-426d-8514-4b9bf5c3c7fb name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.327872548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acba5780-ab5a-426d-8514-4b9bf5c3c7fb name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.329508720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=509c3dcd-782e-4b54-9519-1f0b0862bc4a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.330365057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927254330310437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=509c3dcd-782e-4b54-9519-1f0b0862bc4a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.330909460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a24f75ca-c125-4336-9256-1d199cb9c877 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.331344336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a24f75ca-c125-4336-9256-1d199cb9c877 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.331829846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a24f75ca-c125-4336-9256-1d199cb9c877 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.369606788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=737071d7-9343-4b3c-97e9-f24c031ef610 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.369701832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=737071d7-9343-4b3c-97e9-f24c031ef610 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.371186404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f285298a-0d6c-4e4d-94a5-10e72e125fee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.371864233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927254371839083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f285298a-0d6c-4e4d-94a5-10e72e125fee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.372531237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84a1ae3f-e638-4379-ada4-301b3024fd41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.372606510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84a1ae3f-e638-4379-ada4-301b3024fd41 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:54:14 functional-615476 crio[5567]: time="2025-09-26 22:54:14.372888602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84a1ae3f-e638-4379-ada4-301b3024fd41 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b9c309a0de8e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   31ebf7daf7ee2       busybox-mount
	2de0d368e228a       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb       7 minutes ago       Running             mysql                     0                   8416b93fe743e       mysql-5bb876957f-9dftf
	45fe3ff46320e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   2                   ef40c7993e34f       coredns-66bc5c9577-v7vd6
	7743b91a59da5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       3                   a0e0c091dad72       storage-provisioner
	a80a006afe648       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      7 minutes ago       Running             kube-proxy                3                   4cb0d48da2119       kube-proxy-k6bl8
	3d848b37253f4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Running             etcd                      3                   d85b34d7d4a98       etcd-functional-615476
	4da811b46018d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      7 minutes ago       Running             kube-scheduler            3                   f5146800c1a96       kube-scheduler-functional-615476
	0dec41da22d1c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      7 minutes ago       Running             kube-apiserver            0                   b818a54650972       kube-apiserver-functional-615476
	8acd9802e7a2d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      7 minutes ago       Running             kube-controller-manager   3                   d167b94dbfc3d       kube-controller-manager-functional-615476
	dd8f1f9dd1b05       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      8 minutes ago       Exited              kube-proxy                2                   45797e1be2718       kube-proxy-k6bl8
	e97af5255d2cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       2                   76c29541bf506       storage-provisioner
	e43212efa032d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      8 minutes ago       Exited              kube-controller-manager   2                   e0500a9b3f70d       kube-controller-manager-functional-615476
	f49d09c8b5831       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      8 minutes ago       Exited              etcd                      2                   99fece67f7f5c       etcd-functional-615476
	62733570c49c5       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      8 minutes ago       Exited              kube-scheduler            2                   f24f250692d59       kube-scheduler-functional-615476
	1fb203aad7a20       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      8 minutes ago       Exited              coredns                   1                   c266b29dcb281       coredns-66bc5c9577-v7vd6
	
	
	==> coredns [1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36907 - 55343 "HINFO IN 6257610588922282964.2426922464311900739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015428862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34955 - 18075 "HINFO IN 4014493224373250944.4656664838483367357. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022262791s
	
	
	==> describe nodes <==
	Name:               functional-615476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-615476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-615476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_44_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:44:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-615476
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:54:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:53:24 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:53:24 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:53:24 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:53:24 +0000   Fri, 26 Sep 2025 22:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    functional-615476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 c75779210dd54d8eabe01e46abd06b89
	  System UUID:                c7577921-0dd5-4d8e-abe0-1e46abd06b89
	  Boot ID:                    133b97ef-7d02-4fed-9cdb-f54cfe63448f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wvdjw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  default                     hello-node-connect-7d85dfc575-vspp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  default                     mysql-5bb876957f-9dftf                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    7m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 coredns-66bc5c9577-v7vd6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m39s
	  kube-system                 etcd-functional-615476                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m44s
	  kube-system                 kube-apiserver-functional-615476              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-controller-manager-functional-615476     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-proxy-k6bl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-scheduler-functional-615476              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-qrvbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6c4r4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  Starting                 7m47s                  kube-proxy       
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m44s                  kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m44s                  kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m44s                  kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m44s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m43s                  kubelet          Node functional-615476 status is now: NodeReady
	  Normal  RegisteredNode           9m40s                  node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	  Normal  NodeHasNoDiskPressure    8m36s (x8 over 8m36s)  kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m36s (x8 over 8m36s)  kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     8m36s (x7 over 8m36s)  kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m29s                  node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	  Normal  Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m53s (x8 over 7m53s)  kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s (x8 over 7m53s)  kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s (x7 over 7m53s)  kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m45s                  node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	
	
	==> dmesg <==
	[  +0.006999] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.168553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088994] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098731] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.132093] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.092522] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.191379] kauditd_printk_skb: 249 callbacks suppressed
	[Sep26 22:45] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.976415] kauditd_printk_skb: 349 callbacks suppressed
	[  +5.036591] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.122420] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.593057] kauditd_printk_skb: 2 callbacks suppressed
	[Sep26 22:46] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.029609] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.588088] kauditd_printk_skb: 162 callbacks suppressed
	[  +7.416315] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.033557] kauditd_printk_skb: 109 callbacks suppressed
	[Sep26 22:47] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 47 callbacks suppressed
	[ +17.552065] kauditd_printk_skb: 26 callbacks suppressed
	[Sep26 22:49] kauditd_printk_skb: 25 callbacks suppressed
	[Sep26 22:50] kauditd_printk_skb: 74 callbacks suppressed
	[Sep26 22:53] crun[9747]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946] <==
	{"level":"warn","ts":"2025-09-26T22:46:24.844339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:24.856701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:46:24.929132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51538","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:57.126864Z","caller":"traceutil/trace.go:172","msg":"trace[1012089057] linearizableReadLoop","detail":"{readStateIndex:816; appliedIndex:816; }","duration":"119.281952ms","start":"2025-09-26T22:46:57.007555Z","end":"2025-09-26T22:46:57.126837Z","steps":["trace[1012089057] 'read index received'  (duration: 119.268607ms)","trace[1012089057] 'applied index is now lower than readState.Index'  (duration: 12.222µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.311111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"303.47617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/myclaim\" limit:1 ","response":"range_response_count:1 size:842"}
	{"level":"info","ts":"2025-09-26T22:46:57.311270Z","caller":"traceutil/trace.go:172","msg":"trace[235565901] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/myclaim; range_end:; response_count:1; response_revision:738; }","duration":"303.707395ms","start":"2025-09-26T22:46:57.007551Z","end":"2025-09-26T22:46:57.311258Z","steps":["trace[235565901] 'agreement among raft nodes before linearized reading'  (duration: 119.38191ms)","trace[235565901] 'range keys from in-memory index tree'  (duration: 184.004938ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.311305Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:46:57.007535Z","time spent":"303.756844ms","remote":"127.0.0.1:50680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":864,"request content":"key:\"/registry/persistentvolumeclaims/default/myclaim\" limit:1 "}
	{"level":"warn","ts":"2025-09-26T22:46:57.312077Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.595808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10304967707656802226 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/myclaim\" mod_revision:738 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/myclaim\" value_size:1127 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/myclaim\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-26T22:46:57.312173Z","caller":"traceutil/trace.go:172","msg":"trace[2105615370] linearizableReadLoop","detail":"{readStateIndex:817; appliedIndex:816; }","duration":"185.123109ms","start":"2025-09-26T22:46:57.127018Z","end":"2025-09-26T22:46:57.312141Z","steps":["trace[2105615370] 'read index received'  (duration: 47.895µs)","trace[2105615370] 'applied index is now lower than readState.Index'  (duration: 185.07456ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.312214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.881293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:46:57.312225Z","caller":"traceutil/trace.go:172","msg":"trace[935236764] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:739; }","duration":"293.896198ms","start":"2025-09-26T22:46:57.018325Z","end":"2025-09-26T22:46:57.312221Z","steps":["trace[935236764] 'agreement among raft nodes before linearized reading'  (duration: 293.866397ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:46:57.312264Z","caller":"traceutil/trace.go:172","msg":"trace[1838719913] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"350.669114ms","start":"2025-09-26T22:46:56.961582Z","end":"2025-09-26T22:46:57.312251Z","steps":["trace[1838719913] 'process raft request'  (duration: 165.345483ms)","trace[1838719913] 'compare'  (duration: 184.533787ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.312335Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:46:56.961561Z","time spent":"350.73697ms","remote":"127.0.0.1:50680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1183,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/myclaim\" mod_revision:738 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/myclaim\" value_size:1127 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/myclaim\" > >"}
	{"level":"warn","ts":"2025-09-26T22:46:57.312374Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"274.830374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.253\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-09-26T22:46:57.312388Z","caller":"traceutil/trace.go:172","msg":"trace[1332736172] range","detail":"{range_begin:/registry/masterleases/192.168.39.253; range_end:; response_count:1; response_revision:739; }","duration":"274.845073ms","start":"2025-09-26T22:46:57.037539Z","end":"2025-09-26T22:46:57.312384Z","steps":["trace[1332736172] 'agreement among raft nodes before linearized reading'  (duration: 274.785102ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:46:59.058206Z","caller":"traceutil/trace.go:172","msg":"trace[1509418752] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"205.612528ms","start":"2025-09-26T22:46:58.852572Z","end":"2025-09-26T22:46:59.058185Z","steps":["trace[1509418752] 'process raft request'  (duration: 205.489932ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:47:02.651609Z","caller":"traceutil/trace.go:172","msg":"trace[2020070850] linearizableReadLoop","detail":"{readStateIndex:833; appliedIndex:833; }","duration":"199.346576ms","start":"2025-09-26T22:47:02.452246Z","end":"2025-09-26T22:47:02.651592Z","steps":["trace[2020070850] 'read index received'  (duration: 199.342017ms)","trace[2020070850] 'applied index is now lower than readState.Index'  (duration: 3.873µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:47:02.715486Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.186548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:47:02.715801Z","caller":"traceutil/trace.go:172","msg":"trace[1310166365] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:754; }","duration":"263.545141ms","start":"2025-09-26T22:47:02.452242Z","end":"2025-09-26T22:47:02.715787Z","steps":["trace[1310166365] 'agreement among raft nodes before linearized reading'  (duration: 199.41842ms)","trace[1310166365] 'range keys from in-memory index tree'  (duration: 63.750113ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T22:47:04.700455Z","caller":"traceutil/trace.go:172","msg":"trace[1857410726] linearizableReadLoop","detail":"{readStateIndex:839; appliedIndex:839; }","duration":"248.390431ms","start":"2025-09-26T22:47:04.452051Z","end":"2025-09-26T22:47:04.700441Z","steps":["trace[1857410726] 'read index received'  (duration: 248.385608ms)","trace[1857410726] 'applied index is now lower than readState.Index'  (duration: 4.173µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:47:04.700598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.528271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:47:04.700622Z","caller":"traceutil/trace.go:172","msg":"trace[1026905540] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:759; }","duration":"248.570159ms","start":"2025-09-26T22:47:04.452046Z","end":"2025-09-26T22:47:04.700616Z","steps":["trace[1026905540] 'agreement among raft nodes before linearized reading'  (duration: 248.50258ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:47:04.701103Z","caller":"traceutil/trace.go:172","msg":"trace[2087222857] transaction","detail":"{read_only:false; response_revision:760; number_of_response:1; }","duration":"565.61594ms","start":"2025-09-26T22:47:04.135469Z","end":"2025-09-26T22:47:04.701085Z","steps":["trace[2087222857] 'process raft request'  (duration: 565.350135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:47:04.704157Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:47:04.135452Z","time spent":"566.408192ms","remote":"127.0.0.1:50726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3489,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" mod_revision:692 > success:<request_put:<key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" value_size:3436 >> failure:<request_range:<key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" > >"}
	{"level":"info","ts":"2025-09-26T22:47:09.334827Z","caller":"traceutil/trace.go:172","msg":"trace[583310749] transaction","detail":"{read_only:false; response_revision:770; number_of_response:1; }","duration":"149.763445ms","start":"2025-09-26T22:47:09.185040Z","end":"2025-09-26T22:47:09.334804Z","steps":["trace[583310749] 'process raft request'  (duration: 149.524729ms)"],"step_count":1}
	
	
	==> etcd [f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2] <==
	{"level":"warn","ts":"2025-09-26T22:45:40.742908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.752807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.768669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.784248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.814637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.831585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.881655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:04.171500Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:46:04.171582Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-615476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	{"level":"error","ts":"2025-09-26T22:46:04.171650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:46:04.249829Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:46:04.251902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-26T22:46:04.251925Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252056Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:46:04.252064Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.251947Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3773e8bb706c8f02","current-leader-member-id":"3773e8bb706c8f02"}
	{"level":"info","ts":"2025-09-26T22:46:04.252112Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:46:04.252122Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252126Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252136Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:46:04.252141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.253:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.255907Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"error","ts":"2025-09-26T22:46:04.256066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.253:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.256094Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2025-09-26T22:46:04.256101Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-615476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	
	
	==> kernel <==
	 22:54:14 up 10 min,  0 users,  load average: 0.17, 0.37, 0.29
	Linux functional-615476 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994] <==
	I0926 22:46:27.857320       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 22:46:29.280377       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 22:46:29.330226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 22:46:29.382818       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0926 22:46:46.457163       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.16.205"}
	I0926 22:46:50.959198       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.177.11"}
	I0926 22:46:52.682281       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.198.229"}
	I0926 22:46:52.792410       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.236.193"}
	E0926 22:47:11.140621       1 conn.go:339] Error on socket receive: read tcp 192.168.39.253:8441->192.168.39.1:60800: use of closed network connection
	E0926 22:47:12.677680       1 conn.go:339] Error on socket receive: read tcp 192.168.39.253:8441->192.168.39.1:60830: use of closed network connection
	I0926 22:47:38.319237       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:47.366917       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:53.825354       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:03.171419       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:14.363431       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:49:14.758595       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.252.28"}
	I0926 22:49:14.783559       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.67.13"}
	I0926 22:49:57.581534       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:07.809947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:16.110406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:22.621903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.270175       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:44.558293       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:42.628784       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:54:01.742312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1] <==
	I0926 22:46:29.031260       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0926 22:46:29.033204       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:46:29.034404       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:46:29.035503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:46:29.036679       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:46:29.036769       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 22:46:29.038151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:46:29.039329       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:46:29.039450       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:46:29.039503       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:46:29.040629       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:29.045923       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:46:29.046084       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:46:29.046149       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-615476"
	I0926 22:46:29.046207       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:46:29.048085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:29.049144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:46:29.049169       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:46:29.057288       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E0926 22:49:14.491800       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.517578       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.527713       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.540137       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.566656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.566744       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18] <==
	I0926 22:45:44.991176       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0926 22:45:44.991241       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:45:44.991279       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:45:44.992501       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:45:44.995894       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:45:45.000217       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:45:45.012554       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:45:45.015110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:45:45.018354       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:45:45.025942       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:45:45.029491       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:45:45.033359       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:45:45.033505       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:45:45.033586       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:45:45.033511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:45:45.033683       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:45:45.033688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:45:45.033770       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-615476"
	I0926 22:45:45.033805       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:45:45.034124       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0926 22:45:45.035684       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:45:45.037418       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:45:45.040065       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0926 22:45:45.043235       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:45:45.046542       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507] <==
	I0926 22:46:27.342190       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:46:27.450144       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:46:27.450610       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.253"]
	E0926 22:46:27.450956       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:46:27.518105       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:46:27.518198       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:46:27.518230       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:46:27.528927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:46:27.529317       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:46:27.529358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:27.539098       1 config.go:200] "Starting service config controller"
	I0926 22:46:27.539133       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:46:27.539155       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:46:27.539159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:46:27.539172       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:46:27.539176       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:46:27.539863       1 config.go:309] "Starting node config controller"
	I0926 22:46:27.539932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:46:27.539938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:46:27.640557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:46:27.640584       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:46:27.640614       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dd8f1f9dd1b05ab7eb86c295da804c9125c5974695d3102c23d5f5ce56764b27] <==
	I0926 22:45:42.653890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:45:42.754315       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:45:42.754508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.253"]
	E0926 22:45:42.754614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:45:42.817861       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:45:42.817907       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:45:42.817927       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:45:42.827907       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:45:42.828319       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:45:42.828350       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:45:42.833351       1 config.go:200] "Starting service config controller"
	I0926 22:45:42.833363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:45:42.833381       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:45:42.833385       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:45:42.833394       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:45:42.833397       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:45:42.833708       1 config.go:309] "Starting node config controller"
	I0926 22:45:42.833715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:45:42.833721       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:45:42.933960       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:45:42.934015       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:45:42.934051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153] <==
	I0926 22:46:23.407588       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:46:25.521355       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:46:25.521457       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:46:25.522061       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:46:25.522136       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:46:25.649756       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:46:25.649806       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:25.660256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:25.660301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:25.663670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:46:25.663753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:46:25.761586       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c] <==
	I0926 22:45:40.794020       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:45:41.536751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:45:41.536877       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:45:41.536904       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:45:41.536922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:45:41.626506       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:45:41.626850       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:45:41.629089       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:45:41.629149       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:45:41.633382       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:45:41.633452       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:45:41.730122       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:04.181214       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:46:04.181479       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:46:04.182823       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:46:04.183288       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.474178    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podea4d4941d03a88b7a16ab5be7b589633/crio-f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f: Error finding container f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f: Status 404 returned error can't find the container with id f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.474387    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podae66e2d87889a042120fb5a5d085e38f/crio-e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f: Error finding container e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f: Status 404 returned error can't find the container with id e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.474526    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfee94ace-f9a5-4681-a86a-01d8b513d998/crio-c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc: Error finding container c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc: Status 404 returned error can't find the container with id c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.474717    5910 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc670ee02-4ecb-4f17-b779-1a64005c4259/crio-76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa: Error finding container 76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa: Status 404 returned error can't find the container with id 76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.475039    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddae760326ef99aa8663cb2343716dfa8/crio-99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe: Error finding container 99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe: Status 404 returned error can't find the container with id 99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.475330    5910 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod37d8ee67-d205-47e3-8b92-0c9f65478a89/crio-45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385: Error finding container 45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385: Status 404 returned error can't find the container with id 45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.611841    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927201611499603  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:21 functional-615476 kubelet[5910]: E0926 22:53:21.611882    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927201611499603  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:30 functional-615476 kubelet[5910]: E0926 22:53:30.413704    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-wvdjw" podUID="308a2350-8572-448a-aaa7-72edfa592090"
	Sep 26 22:53:31 functional-615476 kubelet[5910]: E0926 22:53:31.614522    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927211613511577  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:31 functional-615476 kubelet[5910]: E0926 22:53:31.614562    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927211613511577  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:34 functional-615476 kubelet[5910]: E0926 22:53:34.680851    5910 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:34 functional-615476 kubelet[5910]: E0926 22:53:34.680915    5910 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:53:34 functional-615476 kubelet[5910]: E0926 22:53:34.681219    5910 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk_kubernetes-dashboard(032d7cb7-7589-4435-9d70-2e690753035c): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:53:34 functional-615476 kubelet[5910]: E0926 22:53:34.681262    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qrvbk" podUID="032d7cb7-7589-4435-9d70-2e690753035c"
	Sep 26 22:53:41 functional-615476 kubelet[5910]: E0926 22:53:41.617936    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927221617688205  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:41 functional-615476 kubelet[5910]: E0926 22:53:41.618026    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927221617688205  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:44 functional-615476 kubelet[5910]: E0926 22:53:44.413677    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-wvdjw" podUID="308a2350-8572-448a-aaa7-72edfa592090"
	Sep 26 22:53:47 functional-615476 kubelet[5910]: E0926 22:53:47.417326    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qrvbk" podUID="032d7cb7-7589-4435-9d70-2e690753035c"
	Sep 26 22:53:51 functional-615476 kubelet[5910]: E0926 22:53:51.621944    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927231621088037  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:53:51 functional-615476 kubelet[5910]: E0926 22:53:51.622330    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927231621088037  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:54:01 functional-615476 kubelet[5910]: E0926 22:54:01.624439    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927241623951951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:54:01 functional-615476 kubelet[5910]: E0926 22:54:01.624460    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927241623951951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:54:11 functional-615476 kubelet[5910]: E0926 22:54:11.627532    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927251625961843  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:54:11 functional-615476 kubelet[5910]: E0926 22:54:11.627798    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927251625961843  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	
	
	==> storage-provisioner [7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb] <==
	W0926 22:53:49.627218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:51.631690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:51.636560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:53.640421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:53.647507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:55.650524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:55.655163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:57.659630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:57.668185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:59.671625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:53:59.677477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:01.682169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:01.687278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:03.691069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:03.697382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:05.701724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:05.707441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:07.711484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:07.720092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:09.724122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:09.729605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:11.733890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:11.739908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:13.743726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:54:13.754214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0fc] <==
	I0926 22:45:42.531236       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:45:42.556501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:45:42.556688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:45:42.561835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:46.017034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:50.277751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:53.876653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:56.930884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.954163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.966882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:45:59.967778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:45:59.967956       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9!
	I0926 22:45:59.970261       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd30f26c-02dc-403e-bfcd-1fe01b513e27", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9 became leader
	W0926 22:45:59.971547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.985474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:00.068875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9!
	W0926 22:46:01.990146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:01.998217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:04.003251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:04.014253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
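The storage-provisioner log above carries out leader election through a v1 Endpoints resource lock (the LeaderElection event references Endpoints kube-system/k8s.io-minikube-hostpath), which is also why every lease acquire/renew produces one of the "v1 Endpoints is deprecated" warnings. The current holder can be read straight off that object; a minimal sketch, assuming the standard client-go Endpoints lock annotation is in use:

    kubectl --context functional-615476 -n kube-system get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
    # Prints the holder identity and acquire/renew times recorded by the provisioner's leader election record.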
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
helpers_test.go:269: (dbg) Run:  kubectl --context functional-615476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1 (95.265387ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:47:15 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:49:05 +0000
	      Finished:     Fri, 26 Sep 2025 22:49:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2sxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  7m     default-scheduler  Successfully assigned default/busybox-mount to functional-615476
	  Normal  Pulling    7m     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.395s (1m49.818s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wvdjw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xzxg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7xzxg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  7m23s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvdjw to functional-615476
	  Warning  Failed     71s (x3 over 6m12s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     71s (x3 over 6m12s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    31s (x5 over 6m12s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     31s (x5 over 6m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x4 over 7m21s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-vspp8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dx6r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4dx6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m23s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vspp8 to functional-615476
	  Warning  Failed     4m40s (x2 over 6m42s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     101s (x3 over 6m42s)   kubelet            Error: ErrImagePull
	  Warning  Failed     101s                   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    66s (x5 over 6m42s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     66s (x5 over 6m42s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    52s (x4 over 7m21s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvwr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vvwr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m16s                  default-scheduler  Successfully assigned default/sp-pod to functional-615476
	  Warning  Failed     5m12s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m25s (x2 over 5m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m25s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m12s (x2 over 5m12s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m12s (x2 over 5m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    119s (x3 over 7m12s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qrvbk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6c4r4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1
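The aggregated describe exits 1 only because the two kubernetes-dashboard pods listed a moment earlier were already gone by the time the describe ran (the stderr above shows NotFound for both); the four pods that still exist were described normally. Describing pods one at a time avoids the whole command failing on a pod deleted in between; a minimal sketch using the pod names from the listing above:

    for p in busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod; do
      kubectl --context functional-615476 describe pod "$p" || true   # tolerate pods deleted since the listing
    done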
E0926 22:56:32.969387    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.42s)
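The repeated "toomanyrequests" errors above all point at the same root cause: anonymous image pulls from docker.io exhausting Docker Hub's unauthenticated rate limit for the CI host's IP. The remaining anonymous quota can be checked against Docker Hub's rate-limit preview endpoint; a minimal sketch of that documented procedure, assuming curl and jq are available on the host:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer ${TOKEN}" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
    # The ratelimit-limit and ratelimit-remaining headers report the anonymous pull budget left for this IP.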

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-615476 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-615476 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vspp8" [546709eb-f190-4013-8e4d-8441a5701947] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-26 22:56:52.987010885 +0000 UTC m=+1678.817410337
functional_test.go:1645: (dbg) Run:  kubectl --context functional-615476 describe po hello-node-connect-7d85dfc575-vspp8 -n default
functional_test.go:1645: (dbg) kubectl --context functional-615476 describe po hello-node-connect-7d85dfc575-vspp8 -n default:
Name:             hello-node-connect-7d85dfc575-vspp8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615476/192.168.39.253
Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dx6r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4dx6r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vspp8 to functional-615476
Warning  Failed     4m19s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m30s (x4 over 9m59s)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x3 over 9m20s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     78s (x4 over 9m20s)    kubelet            Error: ErrImagePull
Normal   BackOff    11s (x10 over 9m20s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     11s (x10 over 9m20s)   kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-615476 logs hello-node-connect-7d85dfc575-vspp8 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-615476 logs hello-node-connect-7d85dfc575-vspp8 -n default: exit status 1 (89.462156ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vspp8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-615476 logs hello-node-connect-7d85dfc575-vspp8 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
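As in the dashboard test, the pod is stuck in ImagePullBackOff because "kicbase/echo-server" resolves to docker.io and the anonymous pull quota is exhausted. Two quick checks/workarounds, sketched on the assumption that crictl is available inside the KVM guest (normal for the CRI-O runtime) and that the echo-server image exists in the host's local image store:

    # Reproduce the registry error exactly as the kubelet sees it:
    minikube -p functional-615476 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest
    # Side-load the image from the host and stop the deployment from re-pulling it:
    minikube -p functional-615476 image load kicbase/echo-server:latest
    kubectl --context functional-615476 patch deployment hello-node-connect --type=json \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"IfNotPresent"}]'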
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-615476 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-vspp8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615476/192.168.39.253
Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dx6r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4dx6r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vspp8 to functional-615476
Warning  Failed     4m19s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m30s (x4 over 9m59s)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x3 over 9m20s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     78s (x4 over 9m20s)    kubelet            Error: ErrImagePull
Normal   BackOff    11s (x10 over 9m20s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     11s (x10 over 9m20s)   kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-615476 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-615476 logs -l app=hello-node-connect: exit status 1 (68.57039ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vspp8" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-615476 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-615476 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.198.229
IPs:                      10.103.198.229
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32526/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
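The service has no endpoints because its only backing pod never became Ready, so the NodePort (32526/TCP) has nothing to forward to. This can be confirmed without touching the deprecated v1 Endpoints API; a minimal sketch:

    kubectl --context functional-615476 get endpointslices -l kubernetes.io/service-name=hello-node-connect -o wide
    kubectl --context functional-615476 get pods -l app=hello-node-connect \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    # An empty (or not-ready) EndpointSlice here means connections to :32526 will fail regardless of the service definition.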
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-615476 -n functional-615476
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 logs -n 25: (1.640303805s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image save kicbase/echo-server:functional-615476 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image rm kicbase/echo-server:functional-615476 --alsologtostderr                                                                           │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image save --daemon kicbase/echo-server:functional-615476 --alsologtostderr                                                                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ start          │ -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ start          │ -p functional-615476 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ update-context │ functional-615476 update-context --alsologtostderr -v=2                                                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format short --alsologtostderr                                                                                                  │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format yaml --alsologtostderr                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ ssh            │ functional-615476 ssh pgrep buildkitd                                                                                                                        │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │                     │
	│ image          │ functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr                                                       │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls                                                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format json --alsologtostderr                                                                                                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ image          │ functional-615476 image ls --format table --alsologtostderr                                                                                                  │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:53 UTC │ 26 Sep 25 22:53 UTC │
	│ service        │ functional-615476 service list                                                                                                                               │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:56 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:53:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:53:09.442287   21082 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:09.442378   21082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.442383   21082 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:09.442390   21082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.442587   21082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:53:09.443043   21082 out.go:368] Setting JSON to false
	I0926 22:53:09.443913   21082 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1758925055,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:09.444002   21082 start.go:140] virtualization: kvm guest
	I0926 22:53:09.445752   21082 out.go:179] * [functional-615476] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:09.447205   21082 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:09.447209   21082 notify.go:220] Checking for updates...
	I0926 22:53:09.449890   21082 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:09.451124   21082 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:53:09.452259   21082 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:53:09.453425   21082 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:09.454636   21082 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:09.456284   21082 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:53:09.456645   21082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.456717   21082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.473316   21082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0926 22:53:09.473822   21082 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.474359   21082 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.474381   21082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.474731   21082 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.474959   21082 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.475258   21082 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:09.475653   21082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.475691   21082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.489228   21082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0926 22:53:09.489717   21082 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.490212   21082 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.490243   21082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.490680   21082 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.490902   21082 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.521347   21082 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 22:53:09.522767   21082 start.go:304] selected driver: kvm2
	I0926 22:53:09.522786   21082 start.go:924] validating driver "kvm2" against &{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString
: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.522920   21082 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:09.523766   21082 cni.go:84] Creating CNI manager for ""
	I0926 22:53:09.523812   21082 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:53:09.523886   21082 start.go:348] cluster config:
	{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.525375   21082 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.186469636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927414186382734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=766f8ebc-a8cc-4060-8b13-61438dfc3a0b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.188108443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0d6acfe-e552-4ab3-bd32-752e0a92e0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.188375494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0d6acfe-e552-4ab3-bd32-752e0a92e0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.188703954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0d6acfe-e552-4ab3-bd32-752e0a92e0fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.237741347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=838892ff-02d8-4c16-820d-888f0d4b38f4 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.238046751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=838892ff-02d8-4c16-820d-888f0d4b38f4 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.239474960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfd80042-b6e4-4459-9443-d5648e4fc0ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.240252488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927414240227859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfd80042-b6e4-4459-9443-d5648e4fc0ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.241546048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcce5fbe-c0e6-43f9-900c-dc9a69d1a10e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.241597518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcce5fbe-c0e6-43f9-900c-dc9a69d1a10e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.241941944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcce5fbe-c0e6-43f9-900c-dc9a69d1a10e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.281552827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16778701-a98a-4a78-8c68-6a42caa9cd4a name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.281932405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16778701-a98a-4a78-8c68-6a42caa9cd4a name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.283309936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27a44dcc-5c5e-4e65-9c3a-222fd90b6852 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.284070341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927414284047548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27a44dcc-5c5e-4e65-9c3a-222fd90b6852 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.284783753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0c1f316-ca53-442e-a374-25d352fe6480 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.284837812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0c1f316-ca53-442e-a374-25d352fe6480 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.285214015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0c1f316-ca53-442e-a374-25d352fe6480 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.326904323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbdb475c-da0e-4b2b-8bd3-3663309ecef3 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.327056220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbdb475c-da0e-4b2b-8bd3-3663309ecef3 name=/runtime.v1.RuntimeService/Version
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.328558932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2326cecb-ea19-41d4-b395-664caeb1c809 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.329357563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758927414329321494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2326cecb-ea19-41d4-b395-664caeb1c809 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.330065551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0150426f-8adb-481d-947d-f70e4a7f7d03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.330276809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0150426f-8adb-481d-947d-f70e4a7f7d03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 22:56:54 functional-615476 crio[5567]: time="2025-09-26 22:56:54.331212828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8,PodSandboxId:31ebf7daf7ee28f21f54ec895cd865f9602998df164c23bec9f8f3bd6efa60c2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758926945770511870,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 857b4229-9648-4b45-804e-37c86a2a4dc0,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2de0d368e228a92f9ebd7e7c2797777ecbd9d60705d389cd909525736f523889,PodSandboxId:8416b93fe743eb3afafbafdceb3deef040cd1015848a8460cd1e48d5cd0d1a6b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758926822991413480,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-9dftf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6766175-9c7c-4531-8f20-a12f26e25a36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"cont
ainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4,PodSandboxId:ef40c7993e34fd7da6143d58b5beb0b492b56c74ecf1148761be49fc38fda440,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758926787258139913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507,PodSandboxId:4cb0d48da2119607a250cd4b658844ad2bb100cf221d01a8623ebe3ba554a932,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7
e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758926787040625264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb,PodSandboxId:a0e0c091dad72cd79ec946c0823b621206c8de361ff116982915161dd9898fd1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1758926787044111244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946,PodSandboxId:d85b34d7d4a98b4f44d78941088bcd8fceb56871925f487141dfb9ceed06f57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1
758926782190344922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153,PodSandboxId:f5146800c1a96e0d6d8dc094cb50f144cafaf290aa24c34751975754131593d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e92
03e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758926782159038603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994,PodSandboxId:b818a54650972e892c0a5c23d2a3039ab734a5ad75e57c0d960fd81d31a0c081,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be130888
4b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758926782126439578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee542dbc7a21c027f2bed47e1ae4a1cc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1,PodSandboxId:d167b94dbfc3d1d330e2e835305326532397e015471c1c66784cfed6af12e0c8,Metadata:&ContainerMe
tadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758926782037325880,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8f1f9dd1b05ab7eb86c295da8
04c9125c5974695d3102c23d5f5ce56764b27,PodSandboxId:45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758926742399134744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k6bl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d8ee67-d205-47e3-8b92-0c9f65478a89,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0
fc,PodSandboxId:76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758926742389888897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c670ee02-4ecb-4f17-b779-1a64005c4259,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18,PodSandbox
Id:e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758926738598716217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae66e2d87889a042120fb5a5d085e38f,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2,PodSandboxId:99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758926738594927588,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dae760326ef99aa8663cb2343716dfa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c,PodSandboxId:f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758926738585704672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-615476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4d4941d03a88b7a16ab5be7b589633,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7,PodSandboxId:c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758926724838253837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-v7vd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fee94ace-f9a5-4681-a86a-01d8b513d998,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes
.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0150426f-8adb-481d-947d-f70e4a7f7d03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b9c309a0de8e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              mount-munger              0                   31ebf7daf7ee2       busybox-mount
	2de0d368e228a       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb       9 minutes ago       Running             mysql                     0                   8416b93fe743e       mysql-5bb876957f-9dftf
	45fe3ff46320e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   ef40c7993e34f       coredns-66bc5c9577-v7vd6
	7743b91a59da5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   a0e0c091dad72       storage-provisioner
	a80a006afe648       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                3                   4cb0d48da2119       kube-proxy-k6bl8
	3d848b37253f4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   d85b34d7d4a98       etcd-functional-615476
	4da811b46018d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            3                   f5146800c1a96       kube-scheduler-functional-615476
	0dec41da22d1c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   b818a54650972       kube-apiserver-functional-615476
	8acd9802e7a2d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   3                   d167b94dbfc3d       kube-controller-manager-functional-615476
	dd8f1f9dd1b05       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                2                   45797e1be2718       kube-proxy-k6bl8
	e97af5255d2cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   76c29541bf506       storage-provisioner
	e43212efa032d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   2                   e0500a9b3f70d       kube-controller-manager-functional-615476
	f49d09c8b5831       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      2                   99fece67f7f5c       etcd-functional-615476
	62733570c49c5       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            2                   f24f250692d59       kube-scheduler-functional-615476
	1fb203aad7a20       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   c266b29dcb281       coredns-66bc5c9577-v7vd6
	
	
	==> coredns [1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36907 - 55343 "HINFO IN 6257610588922282964.2426922464311900739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015428862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [45fe3ff46320ea32aa247f627e258123f506258da7ead57b6cebde091ee225b4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34955 - 18075 "HINFO IN 4014493224373250944.4656664838483367357. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022262791s
	
	
	==> describe nodes <==
	Name:               functional-615476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-615476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-615476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_44_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:44:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-615476
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:56:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:56:28 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:56:28 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:56:28 +0000   Fri, 26 Sep 2025 22:44:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:56:28 +0000   Fri, 26 Sep 2025 22:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    functional-615476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 c75779210dd54d8eabe01e46abd06b89
	  System UUID:                c7577921-0dd5-4d8e-abe0-1e46abd06b89
	  Boot ID:                    133b97ef-7d02-4fed-9cdb-f54cfe63448f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wvdjw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-vspp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-9dftf                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-v7vd6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-615476                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-615476              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-615476     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-k6bl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-615476              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-qrvbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6c4r4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-615476 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-615476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-615476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-615476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-615476 event: Registered Node functional-615476 in Controller
	
	
	==> dmesg <==
	[  +0.006999] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.168553] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088994] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098731] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.132093] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.092522] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.191379] kauditd_printk_skb: 249 callbacks suppressed
	[Sep26 22:45] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.976415] kauditd_printk_skb: 349 callbacks suppressed
	[  +5.036591] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.122420] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.593057] kauditd_printk_skb: 2 callbacks suppressed
	[Sep26 22:46] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.029609] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.588088] kauditd_printk_skb: 162 callbacks suppressed
	[  +7.416315] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.033557] kauditd_printk_skb: 109 callbacks suppressed
	[Sep26 22:47] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 47 callbacks suppressed
	[ +17.552065] kauditd_printk_skb: 26 callbacks suppressed
	[Sep26 22:49] kauditd_printk_skb: 25 callbacks suppressed
	[Sep26 22:50] kauditd_printk_skb: 74 callbacks suppressed
	[Sep26 22:53] crun[9747]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [3d848b37253f44d835d19efa2b697021ae05580b06ddc7295b52d4eacbd7f946] <==
	{"level":"info","ts":"2025-09-26T22:46:57.126864Z","caller":"traceutil/trace.go:172","msg":"trace[1012089057] linearizableReadLoop","detail":"{readStateIndex:816; appliedIndex:816; }","duration":"119.281952ms","start":"2025-09-26T22:46:57.007555Z","end":"2025-09-26T22:46:57.126837Z","steps":["trace[1012089057] 'read index received'  (duration: 119.268607ms)","trace[1012089057] 'applied index is now lower than readState.Index'  (duration: 12.222µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.311111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"303.47617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/myclaim\" limit:1 ","response":"range_response_count:1 size:842"}
	{"level":"info","ts":"2025-09-26T22:46:57.311270Z","caller":"traceutil/trace.go:172","msg":"trace[235565901] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/myclaim; range_end:; response_count:1; response_revision:738; }","duration":"303.707395ms","start":"2025-09-26T22:46:57.007551Z","end":"2025-09-26T22:46:57.311258Z","steps":["trace[235565901] 'agreement among raft nodes before linearized reading'  (duration: 119.38191ms)","trace[235565901] 'range keys from in-memory index tree'  (duration: 184.004938ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.311305Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:46:57.007535Z","time spent":"303.756844ms","remote":"127.0.0.1:50680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":864,"request content":"key:\"/registry/persistentvolumeclaims/default/myclaim\" limit:1 "}
	{"level":"warn","ts":"2025-09-26T22:46:57.312077Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.595808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10304967707656802226 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/myclaim\" mod_revision:738 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/myclaim\" value_size:1127 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/myclaim\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-26T22:46:57.312173Z","caller":"traceutil/trace.go:172","msg":"trace[2105615370] linearizableReadLoop","detail":"{readStateIndex:817; appliedIndex:816; }","duration":"185.123109ms","start":"2025-09-26T22:46:57.127018Z","end":"2025-09-26T22:46:57.312141Z","steps":["trace[2105615370] 'read index received'  (duration: 47.895µs)","trace[2105615370] 'applied index is now lower than readState.Index'  (duration: 185.07456ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.312214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.881293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:46:57.312225Z","caller":"traceutil/trace.go:172","msg":"trace[935236764] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:739; }","duration":"293.896198ms","start":"2025-09-26T22:46:57.018325Z","end":"2025-09-26T22:46:57.312221Z","steps":["trace[935236764] 'agreement among raft nodes before linearized reading'  (duration: 293.866397ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:46:57.312264Z","caller":"traceutil/trace.go:172","msg":"trace[1838719913] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"350.669114ms","start":"2025-09-26T22:46:56.961582Z","end":"2025-09-26T22:46:57.312251Z","steps":["trace[1838719913] 'process raft request'  (duration: 165.345483ms)","trace[1838719913] 'compare'  (duration: 184.533787ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:46:57.312335Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:46:56.961561Z","time spent":"350.73697ms","remote":"127.0.0.1:50680","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1183,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/myclaim\" mod_revision:738 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/myclaim\" value_size:1127 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/myclaim\" > >"}
	{"level":"warn","ts":"2025-09-26T22:46:57.312374Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"274.830374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.253\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-09-26T22:46:57.312388Z","caller":"traceutil/trace.go:172","msg":"trace[1332736172] range","detail":"{range_begin:/registry/masterleases/192.168.39.253; range_end:; response_count:1; response_revision:739; }","duration":"274.845073ms","start":"2025-09-26T22:46:57.037539Z","end":"2025-09-26T22:46:57.312384Z","steps":["trace[1332736172] 'agreement among raft nodes before linearized reading'  (duration: 274.785102ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:46:59.058206Z","caller":"traceutil/trace.go:172","msg":"trace[1509418752] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"205.612528ms","start":"2025-09-26T22:46:58.852572Z","end":"2025-09-26T22:46:59.058185Z","steps":["trace[1509418752] 'process raft request'  (duration: 205.489932ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:47:02.651609Z","caller":"traceutil/trace.go:172","msg":"trace[2020070850] linearizableReadLoop","detail":"{readStateIndex:833; appliedIndex:833; }","duration":"199.346576ms","start":"2025-09-26T22:47:02.452246Z","end":"2025-09-26T22:47:02.651592Z","steps":["trace[2020070850] 'read index received'  (duration: 199.342017ms)","trace[2020070850] 'applied index is now lower than readState.Index'  (duration: 3.873µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:47:02.715486Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.186548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:47:02.715801Z","caller":"traceutil/trace.go:172","msg":"trace[1310166365] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:754; }","duration":"263.545141ms","start":"2025-09-26T22:47:02.452242Z","end":"2025-09-26T22:47:02.715787Z","steps":["trace[1310166365] 'agreement among raft nodes before linearized reading'  (duration: 199.41842ms)","trace[1310166365] 'range keys from in-memory index tree'  (duration: 63.750113ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T22:47:04.700455Z","caller":"traceutil/trace.go:172","msg":"trace[1857410726] linearizableReadLoop","detail":"{readStateIndex:839; appliedIndex:839; }","duration":"248.390431ms","start":"2025-09-26T22:47:04.452051Z","end":"2025-09-26T22:47:04.700441Z","steps":["trace[1857410726] 'read index received'  (duration: 248.385608ms)","trace[1857410726] 'applied index is now lower than readState.Index'  (duration: 4.173µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T22:47:04.700598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"248.528271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:47:04.700622Z","caller":"traceutil/trace.go:172","msg":"trace[1026905540] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:759; }","duration":"248.570159ms","start":"2025-09-26T22:47:04.452046Z","end":"2025-09-26T22:47:04.700616Z","steps":["trace[1026905540] 'agreement among raft nodes before linearized reading'  (duration: 248.50258ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:47:04.701103Z","caller":"traceutil/trace.go:172","msg":"trace[2087222857] transaction","detail":"{read_only:false; response_revision:760; number_of_response:1; }","duration":"565.61594ms","start":"2025-09-26T22:47:04.135469Z","end":"2025-09-26T22:47:04.701085Z","steps":["trace[2087222857] 'process raft request'  (duration: 565.350135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:47:04.704157Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-26T22:47:04.135452Z","time spent":"566.408192ms","remote":"127.0.0.1:50726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3489,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" mod_revision:692 > success:<request_put:<key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" value_size:3436 >> failure:<request_range:<key:\"/registry/pods/default/mysql-5bb876957f-9dftf\" > >"}
	{"level":"info","ts":"2025-09-26T22:47:09.334827Z","caller":"traceutil/trace.go:172","msg":"trace[583310749] transaction","detail":"{read_only:false; response_revision:770; number_of_response:1; }","duration":"149.763445ms","start":"2025-09-26T22:47:09.185040Z","end":"2025-09-26T22:47:09.334804Z","steps":["trace[583310749] 'process raft request'  (duration: 149.524729ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:56:23.690285Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2025-09-26T22:56:23.716656Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1099,"took":"25.992032ms","hash":815301403,"current-db-size-bytes":3444736,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-09-26T22:56:23.716715Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":815301403,"revision":1099,"compact-revision":-1}
	
	
	==> etcd [f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2] <==
	{"level":"warn","ts":"2025-09-26T22:45:40.742908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.752807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.768669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.784248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.814637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.831585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:45:40.881655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:46:04.171500Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:46:04.171582Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-615476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	{"level":"error","ts":"2025-09-26T22:46:04.171650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:46:04.249829Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:46:04.251902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-26T22:46:04.251925Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252056Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:46:04.252064Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.251947Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3773e8bb706c8f02","current-leader-member-id":"3773e8bb706c8f02"}
	{"level":"info","ts":"2025-09-26T22:46:04.252112Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:46:04.252122Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252126Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:46:04.252136Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:46:04.252141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.253:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.255907Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"error","ts":"2025-09-26T22:46:04.256066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.253:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:46:04.256094Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2025-09-26T22:46:04.256101Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-615476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	
	
	==> kernel <==
	 22:56:54 up 12 min,  0 users,  load average: 0.07, 0.24, 0.25
	Linux functional-615476 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0dec41da22d1cf768a6d75210ed3024a369aeca8cfebc582996501bcbf621994] <==
	I0926 22:46:50.959198       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.177.11"}
	I0926 22:46:52.682281       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.198.229"}
	I0926 22:46:52.792410       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.236.193"}
	E0926 22:47:11.140621       1 conn.go:339] Error on socket receive: read tcp 192.168.39.253:8441->192.168.39.1:60800: use of closed network connection
	E0926 22:47:12.677680       1 conn.go:339] Error on socket receive: read tcp 192.168.39.253:8441->192.168.39.1:60830: use of closed network connection
	I0926 22:47:38.319237       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:47.366917       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:53.825354       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:03.171419       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:49:14.363431       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:49:14.758595       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.252.28"}
	I0926 22:49:14.783559       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.67.13"}
	I0926 22:49:57.581534       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:50:07.809947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:16.110406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:51:22.621903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:28.270175       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:52:44.558293       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:53:42.628784       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:54:01.742312       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:54:51.740565       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:55:17.668201       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:56:16.717092       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:56:25.598109       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:56:28.704493       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8acd9802e7a2d71c6ed24901c7c3732d42a3e0850f4e3ce6e3236b92363afdd1] <==
	I0926 22:46:29.031260       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0926 22:46:29.033204       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 22:46:29.034404       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:46:29.035503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:46:29.036679       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0926 22:46:29.036769       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 22:46:29.038151       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 22:46:29.039329       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:46:29.039450       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:46:29.039503       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:46:29.040629       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:29.045923       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:46:29.046084       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:46:29.046149       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-615476"
	I0926 22:46:29.046207       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:46:29.048085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:46:29.049144       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:46:29.049169       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:46:29.057288       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E0926 22:49:14.491800       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.517578       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.527713       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.540137       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.566656       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:49:14.566744       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18] <==
	I0926 22:45:44.991176       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0926 22:45:44.991241       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:45:44.991279       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:45:44.992501       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0926 22:45:44.995894       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:45:45.000217       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:45:45.012554       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:45:45.015110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:45:45.018354       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 22:45:45.025942       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:45:45.029491       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0926 22:45:45.033359       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 22:45:45.033505       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 22:45:45.033586       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 22:45:45.033511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:45:45.033683       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:45:45.033688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:45:45.033770       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-615476"
	I0926 22:45:45.033805       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 22:45:45.034124       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0926 22:45:45.035684       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0926 22:45:45.037418       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0926 22:45:45.040065       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0926 22:45:45.043235       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0926 22:45:45.046542       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [a80a006afe648c98a59b20b7fe9e36eb9506af53b4f30bfbdd37f4b8328f2507] <==
	I0926 22:46:27.342190       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:46:27.450144       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:46:27.450610       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.253"]
	E0926 22:46:27.450956       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:46:27.518105       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:46:27.518198       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:46:27.518230       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:46:27.528927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:46:27.529317       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:46:27.529358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:27.539098       1 config.go:200] "Starting service config controller"
	I0926 22:46:27.539133       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:46:27.539155       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:46:27.539159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:46:27.539172       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:46:27.539176       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:46:27.539863       1 config.go:309] "Starting node config controller"
	I0926 22:46:27.539932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:46:27.539938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:46:27.640557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:46:27.640584       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:46:27.640614       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dd8f1f9dd1b05ab7eb86c295da804c9125c5974695d3102c23d5f5ce56764b27] <==
	I0926 22:45:42.653890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:45:42.754315       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:45:42.754508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.253"]
	E0926 22:45:42.754614       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:45:42.817861       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 22:45:42.817907       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 22:45:42.817927       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:45:42.827907       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:45:42.828319       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:45:42.828350       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:45:42.833351       1 config.go:200] "Starting service config controller"
	I0926 22:45:42.833363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:45:42.833381       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:45:42.833385       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:45:42.833394       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:45:42.833397       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:45:42.833708       1 config.go:309] "Starting node config controller"
	I0926 22:45:42.833715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:45:42.833721       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:45:42.933960       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:45:42.934015       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:45:42.934051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4da811b46018d31be53a97f95a50a1bf8dedceeac33200ce9f2eed0c7fba2153] <==
	I0926 22:46:23.407588       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:46:25.521355       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:46:25.521457       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:46:25.522061       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:46:25.522136       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:46:25.649756       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:46:25.649806       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:46:25.660256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:25.660301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:25.663670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:46:25.663753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:46:25.761586       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c] <==
	I0926 22:45:40.794020       1 serving.go:386] Generated self-signed cert in-memory
	W0926 22:45:41.536751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 22:45:41.536877       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 22:45:41.536904       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 22:45:41.536922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 22:45:41.626506       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 22:45:41.626850       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:45:41.629089       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:45:41.629149       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:45:41.633382       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 22:45:41.633452       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 22:45:41.730122       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:46:04.181214       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:46:04.181479       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:46:04.182823       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:46:04.183288       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 26 22:56:19 functional-615476 kubelet[5910]: E0926 22:56:19.278952    5910 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-wvdjw_default(308a2350-8572-448a-aaa7-72edfa592090): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:56:19 functional-615476 kubelet[5910]: E0926 22:56:19.279040    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-wvdjw" podUID="308a2350-8572-448a-aaa7-72edfa592090"
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.474121    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfee94ace-f9a5-4681-a86a-01d8b513d998/crio-c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc: Error finding container c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc: Status 404 returned error can't find the container with id c266b29dcb2814b1d471a28ba0f4532ac983e4ea0f702cef62c2d368a95b91cc
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.474753    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddae760326ef99aa8663cb2343716dfa8/crio-99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe: Error finding container 99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe: Status 404 returned error can't find the container with id 99fece67f7f5c3b026ecbbd00d306546e39853faf3f653d6dada27f6e440bbbe
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.475126    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podea4d4941d03a88b7a16ab5be7b589633/crio-f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f: Error finding container f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f: Status 404 returned error can't find the container with id f24f250692d59846b41827ff3962036709944e117846612f085b738c4abe0f5f
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.475491    5910 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc670ee02-4ecb-4f17-b779-1a64005c4259/crio-76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa: Error finding container 76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa: Status 404 returned error can't find the container with id 76c29541bf5066f8d2f1a447db42d573ba10f8f8f6cc3ed623bede46ecb76eaa
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.475954    5910 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod37d8ee67-d205-47e3-8b92-0c9f65478a89/crio-45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385: Error finding container 45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385: Status 404 returned error can't find the container with id 45797e1be27182a5b9185ee216bbf4062d88e73c976f12fb5a8d0b395982d385
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.476409    5910 manager.go:1116] Failed to create existing container: /kubepods/burstable/podae66e2d87889a042120fb5a5d085e38f/crio-e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f: Error finding container e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f: Status 404 returned error can't find the container with id e0500a9b3f70d0c05e9af787563d7f05d6a56df3c77c0cb7a125bc77d89e2b4f
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.665328    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927381664563577  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:21 functional-615476 kubelet[5910]: E0926 22:56:21.665354    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927381664563577  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:27 functional-615476 kubelet[5910]: E0926 22:56:27.413773    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-vspp8" podUID="546709eb-f190-4013-8e4d-8441a5701947"
	Sep 26 22:56:31 functional-615476 kubelet[5910]: E0926 22:56:31.669679    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927391669429906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:31 functional-615476 kubelet[5910]: E0926 22:56:31.669719    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927391669429906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:33 functional-615476 kubelet[5910]: E0926 22:56:33.413403    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-wvdjw" podUID="308a2350-8572-448a-aaa7-72edfa592090"
	Sep 26 22:56:41 functional-615476 kubelet[5910]: E0926 22:56:41.671459    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927401671148273  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:41 functional-615476 kubelet[5910]: E0926 22:56:41.671487    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927401671148273  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:42 functional-615476 kubelet[5910]: E0926 22:56:42.414319    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-vspp8" podUID="546709eb-f190-4013-8e4d-8441a5701947"
	Sep 26 22:56:48 functional-615476 kubelet[5910]: E0926 22:56:48.414454    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-wvdjw" podUID="308a2350-8572-448a-aaa7-72edfa592090"
	Sep 26 22:56:49 functional-615476 kubelet[5910]: E0926 22:56:49.371373    5910 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:56:49 functional-615476 kubelet[5910]: E0926 22:56:49.371430    5910 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:56:49 functional-615476 kubelet[5910]: E0926 22:56:49.371664    5910 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk_kubernetes-dashboard(032d7cb7-7589-4435-9d70-2e690753035c): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:56:49 functional-615476 kubelet[5910]: E0926 22:56:49.371703    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-qrvbk" podUID="032d7cb7-7589-4435-9d70-2e690753035c"
	Sep 26 22:56:51 functional-615476 kubelet[5910]: E0926 22:56:51.673214    5910 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758927411672744973  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:51 functional-615476 kubelet[5910]: E0926 22:56:51.673272    5910 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758927411672744973  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Sep 26 22:56:54 functional-615476 kubelet[5910]: E0926 22:56:54.414365    5910 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-vspp8" podUID="546709eb-f190-4013-8e4d-8441a5701947"
	
	
	==> storage-provisioner [7743b91a59da53d60306b364b523ea638a5b3771441525b3aa46e330d301b9cb] <==
	W0926 22:56:30.514526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:32.519377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:32.524931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:34.529077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:34.534398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:36.538559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:36.543932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:38.547469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:38.553810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:40.558041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:40.569934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:42.574057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:42.581146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:44.584313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:44.593454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:46.597368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:46.603307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:48.607474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:48.612635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:50.616587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:50.622250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:52.629527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:52.640542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:54.643852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:56:54.657899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0fc] <==
	I0926 22:45:42.531236       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0926 22:45:42.556501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0926 22:45:42.556688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0926 22:45:42.561835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:46.017034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:50.277751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:53.876653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:56.930884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.954163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.966882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:45:59.967778       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0926 22:45:59.967956       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9!
	I0926 22:45:59.970261       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd30f26c-02dc-403e-bfcd-1fe01b513e27", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9 became leader
	W0926 22:45:59.971547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:45:59.985474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0926 22:46:00.068875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-615476_e06ef4ee-6afa-483e-84cc-9f0d688113b9!
	W0926 22:46:01.990146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:01.998217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:04.003251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:46:04.014253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
helpers_test.go:269: (dbg) Run:  kubectl --context functional-615476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1 (97.800891ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:47:15 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:49:05 +0000
	      Finished:     Fri, 26 Sep 2025 22:49:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2sxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m40s  default-scheduler  Successfully assigned default/busybox-mount to functional-615476
	  Normal  Pulling    9m40s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.395s (1m49.818s including waiting). Image size: 4631262 bytes.
	  Normal  Created    7m50s  kubelet            Created container: mount-munger
	  Normal  Started    7m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wvdjw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xzxg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7xzxg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvdjw to functional-615476
	  Warning  Failed     3m51s (x3 over 8m52s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m58s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     36s (x4 over 8m52s)    kubelet            Error: ErrImagePull
	  Warning  Failed     36s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x7 over 8m52s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x7 over 8m52s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-vspp8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dx6r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4dx6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vspp8 to functional-615476
	  Warning  Failed     4m21s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m32s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     80s (x3 over 9m22s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     80s (x4 over 9m22s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x11 over 9m22s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x11 over 9m22s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvwr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vvwr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m56s                 default-scheduler  Successfully assigned default/sp-pod to functional-615476
	  Warning  Failed     7m52s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     111s (x3 over 7m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     111s (x2 over 5m5s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    73s (x5 over 7m52s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     73s (x5 over 7m52s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    58s (x4 over 9m52s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qrvbk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6c4r4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.38s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (370.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c670ee02-4ecb-4f17-b779-1a64005c4259] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005901598s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-615476 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-615476 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-615476 get pvc myclaim -o=json
I0926 22:46:57.320711    9914 retry.go:31] will retry after 1.935885423s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:03bb9e9b-a846-44c2-ace6-8ee87058a9fb ResourceVersion:738 Generation:0 CreationTimestamp:2025-09-26 22:46:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00184c1a0 VolumeMode:0xc00184c1b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-615476 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-615476 apply -f testdata/storage-provisioner/pod.yaml
I0926 22:46:59.452484    9914 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [98ebeeb7-6702-49e0-ac46-af33f5ceabfe] Pending
helpers_test.go:352: "sp-pod" [98ebeeb7-6702-49e0-ac46-af33f5ceabfe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-26 22:52:59.742228119 +0000 UTC m=+1445.572627572
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-615476 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-615476 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615476/192.168.39.253
Start Time:       Fri, 26 Sep 2025 22:46:59 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvwr4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-vvwr4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-615476
Warning  Failed     3m56s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     69s (x2 over 3m56s)  kubelet            Error: ErrImagePull
Warning  Failed     69s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    56s (x2 over 3m56s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     56s (x2 over 3m56s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    43s (x3 over 5m56s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-615476 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-615476 logs sp-pod -n default: exit status 1 (72.891004ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-615476 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-615476 -n functional-615476
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 logs -n 25: (1.617845862s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-615476 ssh -n functional-615476 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:46 UTC │ 26 Sep 25 22:46 UTC │
	│ addons    │ functional-615476 addons list                                                                                                     │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:46 UTC │ 26 Sep 25 22:46 UTC │
	│ addons    │ functional-615476 addons list -o json                                                                                             │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:46 UTC │ 26 Sep 25 22:46 UTC │
	│ mount     │ -p functional-615476 /tmp/TestFunctionalparallelMountCmdany-port2791596571/001:/mount-9p --alsologtostderr -v=1                   │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ ssh       │ functional-615476 ssh -- ls -la /mount-9p                                                                                         │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ ssh       │ functional-615476 ssh cat /mount-9p/test-1758926833697966459                                                                      │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:47 UTC │ 26 Sep 25 22:47 UTC │
	│ ssh       │ functional-615476 ssh stat /mount-9p/created-by-test                                                                              │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh stat /mount-9p/created-by-pod                                                                               │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh sudo umount -f /mount-9p                                                                                    │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ mount     │ -p functional-615476 /tmp/TestFunctionalparallelMountCmdspecific-port1566913751/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh -- ls -la /mount-9p                                                                                         │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh sudo umount -f /mount-9p                                                                                    │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ mount     │ -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount3 --alsologtostderr -v=1                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ mount     │ -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount1 --alsologtostderr -v=1                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount1                                                                                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ mount     │ -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount2 --alsologtostderr -v=1                │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ ssh       │ functional-615476 ssh findmnt -T /mount1                                                                                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh findmnt -T /mount2                                                                                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ ssh       │ functional-615476 ssh findmnt -T /mount3                                                                                          │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │ 26 Sep 25 22:49 UTC │
	│ mount     │ -p functional-615476 --kill=true                                                                                                  │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-615476 --alsologtostderr -v=1                                                                    │ functional-615476 │ jenkins │ v1.37.0 │ 26 Sep 25 22:49 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
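Any of the mount/ssh rows above can be replayed by hand against the same profile when a 9p mount needs to be re-checked outside the harness; a minimal sketch using the same binary and profile:

	out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
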
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:46:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:46:02.894018   17294 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:46:02.894236   17294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:46:02.894239   17294 out.go:374] Setting ErrFile to fd 2...
	I0926 22:46:02.894241   17294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:46:02.894443   17294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:46:02.894881   17294 out.go:368] Setting JSON to false
	I0926 22:46:02.895656   17294 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1708,"bootTime":1758925055,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:46:02.895731   17294 start.go:140] virtualization: kvm guest
	I0926 22:46:02.897440   17294 out.go:179] * [functional-615476] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:46:02.898685   17294 notify.go:220] Checking for updates...
	I0926 22:46:02.898743   17294 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:46:02.899941   17294 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:46:02.901257   17294 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:46:02.902587   17294 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:46:02.903731   17294 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:46:02.904804   17294 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:46:02.906501   17294 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:46:02.906617   17294 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:46:02.907249   17294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:46:02.907297   17294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:46:02.920868   17294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0926 22:46:02.921368   17294 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:46:02.921864   17294 main.go:141] libmachine: Using API Version  1
	I0926 22:46:02.921877   17294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:46:02.922286   17294 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:46:02.922473   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:02.954901   17294 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 22:46:02.955884   17294 start.go:304] selected driver: kvm2
	I0926 22:46:02.955891   17294 start.go:924] validating driver "kvm2" against &{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:46:02.955998   17294 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:46:02.956313   17294 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:46:02.956373   17294 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:46:02.970352   17294 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:46:02.970383   17294 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:46:02.984250   17294 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:46:02.985004   17294 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:46:02.985032   17294 cni.go:84] Creating CNI manager for ""
	I0926 22:46:02.985082   17294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:46:02.985136   17294 start.go:348] cluster config:
	{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:46:02.985241   17294 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:46:02.986950   17294 out.go:179] * Starting "functional-615476" primary control-plane node in "functional-615476" cluster
	I0926 22:46:02.988114   17294 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:46:02.988150   17294 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:46:02.988157   17294 cache.go:58] Caching tarball of preloaded images
	I0926 22:46:02.988237   17294 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 22:46:02.988243   17294 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 22:46:02.988337   17294 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/config.json ...
	I0926 22:46:02.988548   17294 start.go:360] acquireMachinesLock for functional-615476: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 22:46:02.988591   17294 start.go:364] duration metric: took 30.055µs to acquireMachinesLock for "functional-615476"
	I0926 22:46:02.988608   17294 start.go:96] Skipping create...Using existing machine configuration
	I0926 22:46:02.988611   17294 fix.go:54] fixHost starting: 
	I0926 22:46:02.988881   17294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:46:02.988915   17294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:46:03.002645   17294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0926 22:46:03.003118   17294 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:46:03.003580   17294 main.go:141] libmachine: Using API Version  1
	I0926 22:46:03.003600   17294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:46:03.004040   17294 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:46:03.004285   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:03.004450   17294 main.go:141] libmachine: (functional-615476) Calling .GetState
	I0926 22:46:03.006394   17294 fix.go:112] recreateIfNeeded on functional-615476: state=Running err=<nil>
	W0926 22:46:03.006418   17294 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 22:46:03.008272   17294 out.go:252] * Updating the running kvm2 "functional-615476" VM ...
	I0926 22:46:03.008293   17294 machine.go:93] provisionDockerMachine start ...
	I0926 22:46:03.008308   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:03.008517   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:03.011648   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.012053   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.012075   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.012321   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:03.012541   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.012692   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.012851   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:03.013031   17294 main.go:141] libmachine: Using SSH client type: native
	I0926 22:46:03.013240   17294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0926 22:46:03.013245   17294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:46:03.126033   17294 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-615476
	
	I0926 22:46:03.126052   17294 main.go:141] libmachine: (functional-615476) Calling .GetMachineName
	I0926 22:46:03.126302   17294 buildroot.go:166] provisioning hostname "functional-615476"
	I0926 22:46:03.126319   17294 main.go:141] libmachine: (functional-615476) Calling .GetMachineName
	I0926 22:46:03.126517   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:03.129840   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.130265   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.130289   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.130475   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:03.130677   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.130818   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.130993   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:03.131158   17294 main.go:141] libmachine: Using SSH client type: native
	I0926 22:46:03.131352   17294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0926 22:46:03.131358   17294 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-615476 && echo "functional-615476" | sudo tee /etc/hostname
	I0926 22:46:03.267535   17294 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-615476
	
	I0926 22:46:03.267561   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:03.270780   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.271169   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.271207   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.271419   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:03.271595   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.271753   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.271859   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:03.271999   17294 main.go:141] libmachine: Using SSH client type: native
	I0926 22:46:03.272256   17294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0926 22:46:03.272274   17294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-615476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-615476/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-615476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:46:03.384497   17294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 22:46:03.384526   17294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 22:46:03.384569   17294 buildroot.go:174] setting up certificates
	I0926 22:46:03.384587   17294 provision.go:84] configureAuth start
	I0926 22:46:03.384614   17294 main.go:141] libmachine: (functional-615476) Calling .GetMachineName
	I0926 22:46:03.384950   17294 main.go:141] libmachine: (functional-615476) Calling .GetIP
	I0926 22:46:03.387930   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.388367   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.388411   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.388527   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:03.390866   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.391183   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.391227   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.391388   17294 provision.go:143] copyHostCerts
	I0926 22:46:03.391454   17294 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 22:46:03.391462   17294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 22:46:03.391536   17294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 22:46:03.391633   17294 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 22:46:03.391637   17294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 22:46:03.391663   17294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 22:46:03.391718   17294 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 22:46:03.391721   17294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 22:46:03.391741   17294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 22:46:03.391779   17294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.functional-615476 san=[127.0.0.1 192.168.39.253 functional-615476 localhost minikube]
	I0926 22:46:03.809985   17294 provision.go:177] copyRemoteCerts
	I0926 22:46:03.810034   17294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:46:03.810055   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:03.812922   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.813213   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:03.813238   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:03.813464   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:03.813663   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:03.813810   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:03.813959   17294 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
	I0926 22:46:03.906392   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 22:46:03.940120   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0926 22:46:03.974127   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 22:46:04.010434   17294 provision.go:87] duration metric: took 625.833547ms to configureAuth
	I0926 22:46:04.010455   17294 buildroot.go:189] setting minikube options for container-runtime
	I0926 22:46:04.010620   17294 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:46:04.010693   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:04.013673   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:04.014115   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:04.014141   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:04.014405   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:04.014635   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:04.014801   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:04.014953   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:04.015135   17294 main.go:141] libmachine: Using SSH client type: native
	I0926 22:46:04.015397   17294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0926 22:46:04.015412   17294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 22:46:09.648998   17294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 22:46:09.649015   17294 machine.go:96] duration metric: took 6.640714573s to provisionDockerMachine
	I0926 22:46:09.649025   17294 start.go:293] postStartSetup for "functional-615476" (driver="kvm2")
	I0926 22:46:09.649034   17294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:46:09.649051   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:09.649404   17294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:46:09.649428   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:09.652429   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.652989   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:09.653005   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.653330   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:09.653507   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:09.653701   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:09.653848   17294 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
	I0926 22:46:09.741601   17294 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:46:09.747011   17294 info.go:137] Remote host: Buildroot 2025.02
	I0926 22:46:09.747027   17294 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 22:46:09.747090   17294 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 22:46:09.747186   17294 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 22:46:09.747263   17294 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/test/nested/copy/9914/hosts -> hosts in /etc/test/nested/copy/9914
	I0926 22:46:09.747301   17294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9914
	I0926 22:46:09.759623   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 22:46:09.795415   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/test/nested/copy/9914/hosts --> /etc/test/nested/copy/9914/hosts (40 bytes)
	I0926 22:46:09.828611   17294 start.go:296] duration metric: took 179.572706ms for postStartSetup
	I0926 22:46:09.828639   17294 fix.go:56] duration metric: took 6.840027475s for fixHost
	I0926 22:46:09.828657   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:09.831664   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.832009   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:09.832031   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.832211   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:09.832449   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:09.832606   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:09.832765   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:09.832948   17294 main.go:141] libmachine: Using SSH client type: native
	I0926 22:46:09.833138   17294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0926 22:46:09.833143   17294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 22:46:09.945626   17294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758926769.942114321
	
	I0926 22:46:09.945644   17294 fix.go:216] guest clock: 1758926769.942114321
	I0926 22:46:09.945650   17294 fix.go:229] Guest: 2025-09-26 22:46:09.942114321 +0000 UTC Remote: 2025-09-26 22:46:09.828641256 +0000 UTC m=+6.969547708 (delta=113.473065ms)
	I0926 22:46:09.945688   17294 fix.go:200] guest clock delta is within tolerance: 113.473065ms
	I0926 22:46:09.945693   17294 start.go:83] releasing machines lock for "functional-615476", held for 6.957096457s
	I0926 22:46:09.945714   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:09.946046   17294 main.go:141] libmachine: (functional-615476) Calling .GetIP
	I0926 22:46:09.949305   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.949649   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:09.949665   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.949807   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:09.950302   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:09.950455   17294 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:46:09.950538   17294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:46:09.950586   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:09.950700   17294 ssh_runner.go:195] Run: cat /version.json
	I0926 22:46:09.950715   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
	I0926 22:46:09.953632   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.953893   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.954046   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:09.954067   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.954247   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:09.954408   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:09.954419   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:09.954433   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:09.954551   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:09.954620   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
	I0926 22:46:09.954716   17294 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
	I0926 22:46:09.954793   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
	I0926 22:46:09.954942   17294 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
	I0926 22:46:09.955101   17294 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
	I0926 22:46:10.037212   17294 ssh_runner.go:195] Run: systemctl --version
	I0926 22:46:10.059540   17294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 22:46:10.204686   17294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 22:46:10.212170   17294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 22:46:10.212218   17294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:46:10.224601   17294 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0926 22:46:10.224615   17294 start.go:495] detecting cgroup driver to use...
	I0926 22:46:10.224669   17294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:46:10.246333   17294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:46:10.265034   17294 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:46:10.265096   17294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:46:10.285137   17294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:46:10.302511   17294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:46:10.497214   17294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:46:10.673050   17294 docker.go:234] disabling docker service ...
	I0926 22:46:10.673124   17294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:46:10.703987   17294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:46:10.720695   17294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:46:10.906558   17294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:46:11.080645   17294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:46:11.101407   17294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:46:11.128786   17294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 22:46:11.128845   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.143849   17294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 22:46:11.143916   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.158519   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.172809   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.187489   17294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:46:11.203365   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.217373   17294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.233059   17294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:46:11.247788   17294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:46:11.259969   17294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 22:46:11.272811   17294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:46:11.450624   17294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 22:46:18.478454   17294 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.027806993s)
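	# The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing CRI-O at registry.k8s.io/pause:3.10.1 with the cgroupfs manager and ip_unprivileged_port_start=0. A quick manual check of the effective values after the restart (not part of the test run) would be:
	#   sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	#   sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version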
	I0926 22:46:18.478473   17294 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 22:46:18.478539   17294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 22:46:18.484844   17294 start.go:563] Will wait 60s for crictl version
	I0926 22:46:18.484902   17294 ssh_runner.go:195] Run: which crictl
	I0926 22:46:18.489557   17294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:46:18.529197   17294 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 22:46:18.529281   17294 ssh_runner.go:195] Run: crio --version
	I0926 22:46:18.561246   17294 ssh_runner.go:195] Run: crio --version
	I0926 22:46:18.599812   17294 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 22:46:18.604310   17294 main.go:141] libmachine: (functional-615476) Calling .GetIP
	I0926 22:46:18.607532   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:18.607958   17294 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
	I0926 22:46:18.607977   17294 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
	I0926 22:46:18.608342   17294 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 22:46:18.615143   17294 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0926 22:46:18.616211   17294 kubeadm.go:883] updating cluster {Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:46:18.616302   17294 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:46:18.616350   17294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:46:18.663769   17294 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:46:18.663783   17294 crio.go:433] Images already preloaded, skipping extraction
	I0926 22:46:18.663871   17294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:46:18.706454   17294 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:46:18.706464   17294 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:46:18.706469   17294 kubeadm.go:934] updating node { 192.168.39.253 8441 v1.34.0 crio true true} ...
	I0926 22:46:18.706547   17294 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-615476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 22:46:18.706606   17294 ssh_runner.go:195] Run: crio config
	I0926 22:46:18.754962   17294 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0926 22:46:18.754979   17294 cni.go:84] Creating CNI manager for ""
	I0926 22:46:18.754990   17294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:46:18.755001   17294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:46:18.755020   17294 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-615476 NodeName:functional-615476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:46:18.755125   17294 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-615476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.253"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 22:46:18.755176   17294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:46:18.768207   17294 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:46:18.768259   17294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:46:18.780734   17294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0926 22:46:18.803531   17294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:46:18.825695   17294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0926 22:46:18.847681   17294 ssh_runner.go:195] Run: grep 192.168.39.253	control-plane.minikube.internal$ /etc/hosts
	I0926 22:46:18.852342   17294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:46:19.024436   17294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:46:19.043793   17294 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476 for IP: 192.168.39.253
	I0926 22:46:19.043806   17294 certs.go:195] generating shared ca certs ...
	I0926 22:46:19.043844   17294 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:46:19.043995   17294 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 22:46:19.044037   17294 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 22:46:19.044045   17294 certs.go:257] generating profile certs ...
	I0926 22:46:19.044118   17294 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.key
	I0926 22:46:19.044157   17294 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/apiserver.key.173b3659
	I0926 22:46:19.044195   17294 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/proxy-client.key
	I0926 22:46:19.044289   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 22:46:19.044314   17294 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 22:46:19.044320   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 22:46:19.044338   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 22:46:19.044355   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:46:19.044371   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 22:46:19.044402   17294 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 22:46:19.044916   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:46:19.078513   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 22:46:19.110846   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:46:19.143751   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 22:46:19.177584   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0926 22:46:19.210309   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 22:46:19.241976   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:46:19.277417   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:46:19.310552   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 22:46:19.342687   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 22:46:19.375316   17294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:46:19.408009   17294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:46:19.430022   17294 ssh_runner.go:195] Run: openssl version
	I0926 22:46:19.437204   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 22:46:19.451122   17294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 22:46:19.457751   17294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 22:46:19.457843   17294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 22:46:19.465663   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
	I0926 22:46:19.478003   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 22:46:19.492477   17294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 22:46:19.498895   17294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 22:46:19.498948   17294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 22:46:19.507587   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 22:46:19.519994   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:46:19.534504   17294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:46:19.540682   17294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:46:19.540737   17294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:46:19.548781   17294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 22:46:19.561444   17294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:46:19.567893   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 22:46:19.575857   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 22:46:19.583747   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 22:46:19.591744   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 22:46:19.599748   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 22:46:19.607659   17294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0926 22:46:19.615253   17294 kubeadm.go:400] StartCluster: {Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functiona
l-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:46:19.615316   17294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 22:46:19.615359   17294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:46:19.658158   17294 cri.go:89] found id: "dd8f1f9dd1b05ab7eb86c295da804c9125c5974695d3102c23d5f5ce56764b27"
	I0926 22:46:19.658169   17294 cri.go:89] found id: "e97af5255d2cf93fa9ba2d026e82ac01c1cfd9a81c9fa3e51ca579381e11d0fc"
	I0926 22:46:19.658172   17294 cri.go:89] found id: "e43212efa032d59d887115cb52dfa89966099ef1da46f83cc8987c164267fd18"
	I0926 22:46:19.658174   17294 cri.go:89] found id: "f49d09c8b58317559bcfb21e8180cfd2763a46ae267b78e3e4b5678a35e180e2"
	I0926 22:46:19.658176   17294 cri.go:89] found id: "62733570c49c55bded3a57d9d93052194a6a3821316c573e7ade576d15f2412c"
	I0926 22:46:19.658178   17294 cri.go:89] found id: "aff4aeaddfb9152a5d1d1fd759d48f102a7087782eef247013eed724d37e5ebf"
	I0926 22:46:19.658179   17294 cri.go:89] found id: "1fb203aad7a208ee61f02ad4aa37585620a273c95cadced2502ff729a2586ef7"
	I0926 22:46:19.658182   17294 cri.go:89] found id: "bdcb3c19204f2a8faabeed2c4933f8899627d59fa61c860d954696770eb0bee9"
	I0926 22:46:19.658184   17294 cri.go:89] found id: "c7f99c75ceb4ffe51e76d2a9fbe4f2089b32543a48ec13997b7e89e5ca5aac33"
	I0926 22:46:19.658189   17294 cri.go:89] found id: "7b1060d5f77e609e806e8527c45b5e0812ac235f5d3ece90ed2c94b551243275"
	I0926 22:46:19.658191   17294 cri.go:89] found id: "13dd47828dd7413e562e1d08db7382c00a2df92ed398c1f4b6d036e44daad1e0"
	I0926 22:46:19.658192   17294 cri.go:89] found id: "c53e5ab82387ec5cfdad03b5ce5026a359ae4615b9290e25ad8ebb8ea129bd79"
	I0926 22:46:19.658194   17294 cri.go:89] found id: "99792e63c911517b220ac17d2ce358a5465fc8d16c60901b2f1895c2467b8462"
	I0926 22:46:19.658195   17294 cri.go:89] found id: "38ca4b4fefd951274d85dd2e399d064929a12107f9725ce5358ab603fee42a3a"
	I0926 22:46:19.658197   17294 cri.go:89] found id: ""
	I0926 22:46:19.658234   17294 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
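For reference, the kubeadm config shown earlier in this log excerpt is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML file. A minimal, hypothetical Go sketch for sanity-checking the KubeletConfiguration document it contains (the cgroup driver and CRI-O socket values come from the log above; gopkg.in/yaml.v3 is an assumed dependency, and this is not part of the minikube test suite):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3" // assumed YAML dependency
	)

	func main() {
		// Path used by minikube in the log above; adjust if the file was copied elsewhere.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		// The file is multi-document YAML separated by "---" lines.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatal(err)
			}
			if m["kind"] != "KubeletConfiguration" {
				continue
			}
			// Expect the values generated above: cgroupfs and the CRI-O socket.
			fmt.Println("cgroupDriver:", m["cgroupDriver"])
			fmt.Println("containerRuntimeEndpoint:", m["containerRuntimeEndpoint"])
		}
	}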
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
helpers_test.go:269: (dbg) Run:  kubectl --context functional-615476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1 (99.289873ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:47:15 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7b9c309a0de8e1d33c678eae39cdd78cf313b3b25ce81b9400dc8d7d189c1ee8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:49:05 +0000
	      Finished:     Fri, 26 Sep 2025 22:49:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2sxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m46s  default-scheduler  Successfully assigned default/busybox-mount to functional-615476
	  Normal  Pulling    5m47s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m57s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.395s (1m49.818s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m57s  kubelet            Created container: mount-munger
	  Normal  Started    3m57s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wvdjw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xzxg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7xzxg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvdjw to functional-615476
	  Warning  Failed     2m57s (x2 over 4m59s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m57s (x2 over 4m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m42s (x2 over 4m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m42s (x2 over 4m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m28s (x3 over 6m8s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-vspp8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dx6r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4dx6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vspp8 to functional-615476
	  Warning  Failed     3m27s (x2 over 5m29s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x3 over 6m8s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     28s (x3 over 5m29s)    kubelet            Error: ErrImagePull
	  Warning  Failed     28s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x4 over 5m29s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x4 over 5m29s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615476/192.168.39.253
	Start Time:       Fri, 26 Sep 2025 22:46:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvwr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vvwr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-615476
	  Warning  Failed     3m59s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x2 over 3m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     72s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    59s (x2 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s (x2 over 3m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    46s (x3 over 5m59s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-qrvbk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6c4r4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-615476 describe pod busybox-mount hello-node-75c85bcc94-wvdjw hello-node-connect-7d85dfc575-vspp8 sp-pod dashboard-metrics-scraper-77bf4d6c4c-qrvbk kubernetes-dashboard-855c9754f9-6c4r4: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.54s)
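Every ImagePullBackOff in the pod events above traces back to Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). When that cause is in doubt, the remaining anonymous budget can be inspected via Docker Hub's documented ratelimitpreview endpoint; the following is a small, hypothetical Go sketch of that check (endpoint names per Docker's documentation, not part of this test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// 1. Fetch an anonymous pull token scoped to the rate-limit preview repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			log.Fatal(err)
		}

		// 2. HEAD a manifest; the response headers describe the anonymous pull budget.
		req, _ := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer res.Body.Close()

		// Typical values look like "100;w=21600" (pulls per six-hour window).
		fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
	}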

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-615476 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-615476 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wvdjw" [308a2350-8572-448a-aaa7-72edfa592090] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0926 22:46:53.466874    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-615476 -n functional-615476
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-26 22:56:53.094212783 +0000 UTC m=+1678.924612235
functional_test.go:1460: (dbg) Run:  kubectl --context functional-615476 describe po hello-node-75c85bcc94-wvdjw -n default
functional_test.go:1460: (dbg) kubectl --context functional-615476 describe po hello-node-75c85bcc94-wvdjw -n default:
Name:             hello-node-75c85bcc94-wvdjw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615476/192.168.39.253
Start Time:       Fri, 26 Sep 2025 22:46:52 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xzxg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7xzxg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wvdjw to functional-615476
Warning  Failed     3m49s (x3 over 8m50s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m56s (x4 over 9m59s)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     34s (x4 over 8m50s)    kubelet            Error: ErrImagePull
Warning  Failed     34s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    5s (x7 over 8m50s)     kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     5s (x7 over 8m50s)     kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-615476 logs hello-node-75c85bcc94-wvdjw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-615476 logs hello-node-75c85bcc94-wvdjw -n default: exit status 1 (75.635097ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wvdjw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-615476 logs hello-node-75c85bcc94-wvdjw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.77s)
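The deployment itself was created; the test then just polls for a Ready pod matching app=hello-node and times out after 10m0s because the image never arrives. A rough, hypothetical equivalent of that wait loop using client-go (this is not the actual helpers_test.go code; the kubeconfig path and namespace are assumptions) would look like:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the CI job points at its own profile-specific config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Poll every 5s for up to 10m, mirroring the 10m0s wait seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
					LabelSelector: "app=hello-node",
				})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							fmt.Println("ready:", p.Name)
							return true, nil
						}
					}
				}
				return false, nil // keep polling; an ImagePullBackOff pod never becomes Ready
			})
		if err != nil {
			log.Fatal("pod never became ready: ", err)
		}
	}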

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 service --namespace=default --https --url hello-node: exit status 115 (298.576629ms)

                                                
                                                
-- stdout --
	https://192.168.39.253:30979
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-615476 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
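SVC_UNREACHABLE here (and in the Format and URL subtests below) follows directly from the DeployApp failure above: the hello-node Service exists and has NodePort 30979 allocated, but it has no ready endpoints because its only pod is stuck in ImagePullBackOff. A quick, hypothetical client-go check of that condition (service name and namespace taken from the log; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The Addresses field of an Endpoints subset only lists Ready pod IPs; a pod in
		// ImagePullBackOff never shows up there, which is what "no running pod for service" means.
		ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "hello-node", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ready := 0
		for _, s := range ep.Subsets {
			ready += len(s.Addresses)
		}
		fmt.Printf("hello-node ready endpoints: %d\n", ready) // 0 while the image pull keeps failing
	}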

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 service hello-node --url --format={{.IP}}: exit status 115 (300.210337ms)

                                                
                                                
-- stdout --
	192.168.39.253
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-615476 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 service hello-node --url: exit status 115 (306.956923ms)

                                                
                                                
-- stdout --
	http://192.168.39.253:30979
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-615476 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.253:30979
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
x
+
TestPreload (128.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627811 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E0926 23:36:16.041716    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627811 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m6.835332592s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627811 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-627811 image pull gcr.io/k8s-minikube/busybox: (2.534817042s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-627811
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-627811: (6.9754559s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627811 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:36:32.969056    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:51.007452    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627811 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.007197976s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627811 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-26 23:37:18.797474167 +0000 UTC m=+4104.627873620
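The premise of TestPreload is that an image pulled while running with --preload=false (gcr.io/k8s-minikube/busybox) should still be present in CRI-O's storage after the stop/start cycle; the image list above shows it is gone. One way to double-check on the node itself, assuming the profile is still running, is to query CRI-O over minikube ssh. A hedged Go sketch of that check (both `minikube ssh` and `crictl images` are standard commands; the binary path and profile name come from the log above):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask CRI-O on the node which images it currently has.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-627811",
			"ssh", "--", "sudo", "crictl", "images").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl images failed: %v\n%s", err, out)
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox survived the restart in CRI-O storage")
		} else {
			fmt.Println("busybox is missing from CRI-O storage (matches the failure above)")
		}
	}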
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-627811 -n test-preload-627811
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-627811 logs -n 25: (1.165313216s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-703869 ssh -n multinode-703869-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ ssh     │ multinode-703869 ssh -n multinode-703869 sudo cat /home/docker/cp-test_multinode-703869-m03_multinode-703869.txt                                                                    │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ cp      │ multinode-703869 cp multinode-703869-m03:/home/docker/cp-test.txt multinode-703869-m02:/home/docker/cp-test_multinode-703869-m03_multinode-703869-m02.txt                           │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ ssh     │ multinode-703869 ssh -n multinode-703869-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ ssh     │ multinode-703869 ssh -n multinode-703869-m02 sudo cat /home/docker/cp-test_multinode-703869-m03_multinode-703869-m02.txt                                                            │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ node    │ multinode-703869 node stop m03                                                                                                                                                      │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:24 UTC │
	│ node    │ multinode-703869 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:24 UTC │ 26 Sep 25 23:25 UTC │
	│ node    │ list -p multinode-703869                                                                                                                                                            │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:25 UTC │                     │
	│ stop    │ -p multinode-703869                                                                                                                                                                 │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:25 UTC │ 26 Sep 25 23:28 UTC │
	│ start   │ -p multinode-703869 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:28 UTC │ 26 Sep 25 23:30 UTC │
	│ node    │ list -p multinode-703869                                                                                                                                                            │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:30 UTC │                     │
	│ node    │ multinode-703869 node delete m03                                                                                                                                                    │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:30 UTC │ 26 Sep 25 23:30 UTC │
	│ stop    │ multinode-703869 stop                                                                                                                                                               │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:30 UTC │ 26 Sep 25 23:33 UTC │
	│ start   │ -p multinode-703869 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:33 UTC │ 26 Sep 25 23:34 UTC │
	│ node    │ list -p multinode-703869                                                                                                                                                            │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │                     │
	│ start   │ -p multinode-703869-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-703869-m02 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │                     │
	│ start   │ -p multinode-703869-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-703869-m03 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:35 UTC │
	│ node    │ add -p multinode-703869                                                                                                                                                             │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │                     │
	│ delete  │ -p multinode-703869-m03                                                                                                                                                             │ multinode-703869-m03 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ delete  │ -p multinode-703869                                                                                                                                                                 │ multinode-703869     │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ start   │ -p test-preload-627811 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-627811  │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:36 UTC │
	│ image   │ test-preload-627811 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-627811  │ jenkins │ v1.37.0 │ 26 Sep 25 23:36 UTC │ 26 Sep 25 23:36 UTC │
	│ stop    │ -p test-preload-627811                                                                                                                                                              │ test-preload-627811  │ jenkins │ v1.37.0 │ 26 Sep 25 23:36 UTC │ 26 Sep 25 23:36 UTC │
	│ start   │ -p test-preload-627811 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-627811  │ jenkins │ v1.37.0 │ 26 Sep 25 23:36 UTC │ 26 Sep 25 23:37 UTC │
	│ image   │ test-preload-627811 image list                                                                                                                                                      │ test-preload-627811  │ jenkins │ v1.37.0 │ 26 Sep 25 23:37 UTC │ 26 Sep 25 23:37 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:36:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:36:29.607813   44052 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:36:29.608090   44052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:36:29.608100   44052 out.go:374] Setting ErrFile to fd 2...
	I0926 23:36:29.608105   44052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:36:29.608284   44052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:36:29.608734   44052 out.go:368] Setting JSON to false
	I0926 23:36:29.609576   44052 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4735,"bootTime":1758925055,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:36:29.609661   44052 start.go:140] virtualization: kvm guest
	I0926 23:36:29.611741   44052 out.go:179] * [test-preload-627811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:36:29.613289   44052 notify.go:220] Checking for updates...
	I0926 23:36:29.613345   44052 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:36:29.614774   44052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:36:29.616347   44052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:36:29.617797   44052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:36:29.619286   44052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:36:29.620636   44052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:36:29.622521   44052 config.go:182] Loaded profile config "test-preload-627811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0926 23:36:29.623128   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:36:29.623208   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:36:29.637188   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0926 23:36:29.637662   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:36:29.638230   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:36:29.638256   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:36:29.638646   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:36:29.638887   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:29.640655   44052 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0926 23:36:29.641969   44052 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:36:29.642301   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:36:29.642363   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:36:29.655691   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0926 23:36:29.656090   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:36:29.656499   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:36:29.656518   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:36:29.656883   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:36:29.657127   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:29.691262   44052 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 23:36:29.692533   44052 start.go:304] selected driver: kvm2
	I0926 23:36:29.692564   44052 start.go:924] validating driver "kvm2" against &{Name:test-preload-627811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-627811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:36:29.692682   44052 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:36:29.693398   44052 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:36:29.693487   44052 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:36:29.706962   44052 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:36:29.706991   44052 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:36:29.720739   44052 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:36:29.721166   44052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:36:29.721211   44052 cni.go:84] Creating CNI manager for ""
	I0926 23:36:29.721263   44052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:36:29.721336   44052 start.go:348] cluster config:
	{Name:test-preload-627811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-627811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:36:29.721464   44052 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:36:29.724378   44052 out.go:179] * Starting "test-preload-627811" primary control-plane node in "test-preload-627811" cluster
	I0926 23:36:29.726009   44052 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0926 23:36:29.742305   44052 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:36:29.742335   44052 cache.go:58] Caching tarball of preloaded images
	I0926 23:36:29.742496   44052 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0926 23:36:29.744342   44052 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0926 23:36:29.745703   44052 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0926 23:36:29.770750   44052 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:36:32.213461   44052 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0926 23:36:32.213555   44052 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0926 23:36:32.948843   44052 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0926 23:36:32.948957   44052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/config.json ...
	I0926 23:36:32.949223   44052 start.go:360] acquireMachinesLock for test-preload-627811: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:36:32.949287   44052 start.go:364] duration metric: took 42.8µs to acquireMachinesLock for "test-preload-627811"
	I0926 23:36:32.949305   44052 start.go:96] Skipping create...Using existing machine configuration
	I0926 23:36:32.949309   44052 fix.go:54] fixHost starting: 
	I0926 23:36:32.949569   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:36:32.949610   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:36:32.962981   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0926 23:36:32.963426   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:36:32.963907   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:36:32.963933   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:36:32.964294   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:36:32.964503   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:32.964647   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetState
	I0926 23:36:32.966527   44052 fix.go:112] recreateIfNeeded on test-preload-627811: state=Stopped err=<nil>
	I0926 23:36:32.966558   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	W0926 23:36:32.966770   44052 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 23:36:32.968902   44052 out.go:252] * Restarting existing kvm2 VM for "test-preload-627811" ...
	I0926 23:36:32.968928   44052 main.go:141] libmachine: (test-preload-627811) Calling .Start
	I0926 23:36:32.969073   44052 main.go:141] libmachine: (test-preload-627811) starting domain...
	I0926 23:36:32.969095   44052 main.go:141] libmachine: (test-preload-627811) ensuring networks are active...
	I0926 23:36:32.969958   44052 main.go:141] libmachine: (test-preload-627811) Ensuring network default is active
	I0926 23:36:32.970418   44052 main.go:141] libmachine: (test-preload-627811) Ensuring network mk-test-preload-627811 is active
	I0926 23:36:32.971141   44052 main.go:141] libmachine: (test-preload-627811) getting domain XML...
	I0926 23:36:32.972337   44052 main.go:141] libmachine: (test-preload-627811) DBG | starting domain XML:
	I0926 23:36:32.972356   44052 main.go:141] libmachine: (test-preload-627811) DBG | <domain type='kvm'>
	I0926 23:36:32.972363   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <name>test-preload-627811</name>
	I0926 23:36:32.972374   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <uuid>f05d7928-3705-40b9-a168-a571110811ee</uuid>
	I0926 23:36:32.972385   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <memory unit='KiB'>3145728</memory>
	I0926 23:36:32.972398   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0926 23:36:32.972407   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 23:36:32.972412   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <os>
	I0926 23:36:32.972420   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 23:36:32.972425   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <boot dev='cdrom'/>
	I0926 23:36:32.972431   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <boot dev='hd'/>
	I0926 23:36:32.972436   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <bootmenu enable='no'/>
	I0926 23:36:32.972467   44052 main.go:141] libmachine: (test-preload-627811) DBG |   </os>
	I0926 23:36:32.972491   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <features>
	I0926 23:36:32.972502   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <acpi/>
	I0926 23:36:32.972509   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <apic/>
	I0926 23:36:32.972517   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <pae/>
	I0926 23:36:32.972524   44052 main.go:141] libmachine: (test-preload-627811) DBG |   </features>
	I0926 23:36:32.972535   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 23:36:32.972545   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <clock offset='utc'/>
	I0926 23:36:32.972560   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 23:36:32.972568   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <on_reboot>restart</on_reboot>
	I0926 23:36:32.972609   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <on_crash>destroy</on_crash>
	I0926 23:36:32.972630   44052 main.go:141] libmachine: (test-preload-627811) DBG |   <devices>
	I0926 23:36:32.972643   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 23:36:32.972651   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <disk type='file' device='cdrom'>
	I0926 23:36:32.972662   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <driver name='qemu' type='raw'/>
	I0926 23:36:32.972683   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/boot2docker.iso'/>
	I0926 23:36:32.972696   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 23:36:32.972708   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <readonly/>
	I0926 23:36:32.972721   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 23:36:32.972729   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </disk>
	I0926 23:36:32.972737   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <disk type='file' device='disk'>
	I0926 23:36:32.972749   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 23:36:32.972773   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/test-preload-627811.rawdisk'/>
	I0926 23:36:32.972789   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <target dev='hda' bus='virtio'/>
	I0926 23:36:32.972802   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 23:36:32.972812   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </disk>
	I0926 23:36:32.972836   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 23:36:32.972862   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 23:36:32.972875   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </controller>
	I0926 23:36:32.972898   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 23:36:32.972913   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 23:36:32.972936   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 23:36:32.972947   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </controller>
	I0926 23:36:32.972958   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <interface type='network'>
	I0926 23:36:32.972967   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <mac address='52:54:00:a7:c2:a7'/>
	I0926 23:36:32.972976   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <source network='mk-test-preload-627811'/>
	I0926 23:36:32.972986   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <model type='virtio'/>
	I0926 23:36:32.973002   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 23:36:32.973013   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </interface>
	I0926 23:36:32.973027   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <interface type='network'>
	I0926 23:36:32.973042   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <mac address='52:54:00:ac:a7:9e'/>
	I0926 23:36:32.973062   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <source network='default'/>
	I0926 23:36:32.973080   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <model type='virtio'/>
	I0926 23:36:32.973095   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 23:36:32.973105   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </interface>
	I0926 23:36:32.973111   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <serial type='pty'>
	I0926 23:36:32.973120   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <target type='isa-serial' port='0'>
	I0926 23:36:32.973129   44052 main.go:141] libmachine: (test-preload-627811) DBG |         <model name='isa-serial'/>
	I0926 23:36:32.973140   44052 main.go:141] libmachine: (test-preload-627811) DBG |       </target>
	I0926 23:36:32.973161   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </serial>
	I0926 23:36:32.973180   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <console type='pty'>
	I0926 23:36:32.973195   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <target type='serial' port='0'/>
	I0926 23:36:32.973211   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </console>
	I0926 23:36:32.973223   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <input type='mouse' bus='ps2'/>
	I0926 23:36:32.973232   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 23:36:32.973238   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <audio id='1' type='none'/>
	I0926 23:36:32.973245   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <memballoon model='virtio'>
	I0926 23:36:32.973266   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 23:36:32.973276   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </memballoon>
	I0926 23:36:32.973288   44052 main.go:141] libmachine: (test-preload-627811) DBG |     <rng model='virtio'>
	I0926 23:36:32.973300   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <backend model='random'>/dev/random</backend>
	I0926 23:36:32.973312   44052 main.go:141] libmachine: (test-preload-627811) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 23:36:32.973319   44052 main.go:141] libmachine: (test-preload-627811) DBG |     </rng>
	I0926 23:36:32.973328   44052 main.go:141] libmachine: (test-preload-627811) DBG |   </devices>
	I0926 23:36:32.973338   44052 main.go:141] libmachine: (test-preload-627811) DBG | </domain>
	I0926 23:36:32.973349   44052 main.go:141] libmachine: (test-preload-627811) DBG | 
	I0926 23:36:34.238193   44052 main.go:141] libmachine: (test-preload-627811) waiting for domain to start...
	I0926 23:36:34.239755   44052 main.go:141] libmachine: (test-preload-627811) domain is now running
	I0926 23:36:34.239786   44052 main.go:141] libmachine: (test-preload-627811) waiting for IP...
	I0926 23:36:34.240761   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:34.241512   44052 main.go:141] libmachine: (test-preload-627811) found domain IP: 192.168.39.68
	I0926 23:36:34.241555   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has current primary IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:34.241570   44052 main.go:141] libmachine: (test-preload-627811) reserving static IP address...
	I0926 23:36:34.242043   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "test-preload-627811", mac: "52:54:00:a7:c2:a7", ip: "192.168.39.68"} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:35:29 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:34.242069   44052 main.go:141] libmachine: (test-preload-627811) DBG | skip adding static IP to network mk-test-preload-627811 - found existing host DHCP lease matching {name: "test-preload-627811", mac: "52:54:00:a7:c2:a7", ip: "192.168.39.68"}
	I0926 23:36:34.242085   44052 main.go:141] libmachine: (test-preload-627811) reserved static IP address 192.168.39.68 for domain test-preload-627811
	I0926 23:36:34.242103   44052 main.go:141] libmachine: (test-preload-627811) waiting for SSH...
	I0926 23:36:34.242117   44052 main.go:141] libmachine: (test-preload-627811) DBG | Getting to WaitForSSH function...
	I0926 23:36:34.244553   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:34.244962   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:35:29 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:34.244997   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:34.245115   44052 main.go:141] libmachine: (test-preload-627811) DBG | Using SSH client type: external
	I0926 23:36:34.245142   44052 main.go:141] libmachine: (test-preload-627811) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa (-rw-------)
	I0926 23:36:34.245175   44052 main.go:141] libmachine: (test-preload-627811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:36:34.245196   44052 main.go:141] libmachine: (test-preload-627811) DBG | About to run SSH command:
	I0926 23:36:34.245226   44052 main.go:141] libmachine: (test-preload-627811) DBG | exit 0
	I0926 23:36:45.532457   44052 main.go:141] libmachine: (test-preload-627811) DBG | SSH cmd err, output: exit status 255: 
	I0926 23:36:45.532489   44052 main.go:141] libmachine: (test-preload-627811) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0926 23:36:45.532501   44052 main.go:141] libmachine: (test-preload-627811) DBG | command : exit 0
	I0926 23:36:45.532510   44052 main.go:141] libmachine: (test-preload-627811) DBG | err     : exit status 255
	I0926 23:36:45.532522   44052 main.go:141] libmachine: (test-preload-627811) DBG | output  : 
	I0926 23:36:48.533001   44052 main.go:141] libmachine: (test-preload-627811) DBG | Getting to WaitForSSH function...
	I0926 23:36:48.536357   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.536781   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:48.536813   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.536977   44052 main.go:141] libmachine: (test-preload-627811) DBG | Using SSH client type: external
	I0926 23:36:48.537009   44052 main.go:141] libmachine: (test-preload-627811) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa (-rw-------)
	I0926 23:36:48.537053   44052 main.go:141] libmachine: (test-preload-627811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:36:48.537074   44052 main.go:141] libmachine: (test-preload-627811) DBG | About to run SSH command:
	I0926 23:36:48.537093   44052 main.go:141] libmachine: (test-preload-627811) DBG | exit 0
	I0926 23:36:48.673489   44052 main.go:141] libmachine: (test-preload-627811) DBG | SSH cmd err, output: <nil>: 
	I0926 23:36:48.673934   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetConfigRaw
	I0926 23:36:48.674655   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetIP
	I0926 23:36:48.677895   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.678312   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:48.678340   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.678654   44052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/config.json ...
	I0926 23:36:48.678888   44052 machine.go:93] provisionDockerMachine start ...
	I0926 23:36:48.678906   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:48.679128   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:48.682030   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.682443   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:48.682466   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.682674   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:48.682857   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.683021   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.683183   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:48.683343   44052 main.go:141] libmachine: Using SSH client type: native
	I0926 23:36:48.683707   44052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0926 23:36:48.683725   44052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:36:48.802042   44052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 23:36:48.802066   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetMachineName
	I0926 23:36:48.802305   44052 buildroot.go:166] provisioning hostname "test-preload-627811"
	I0926 23:36:48.802333   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetMachineName
	I0926 23:36:48.802534   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:48.805388   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.805789   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:48.805819   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.805972   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:48.806155   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.806316   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.806455   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:48.806614   44052 main.go:141] libmachine: Using SSH client type: native
	I0926 23:36:48.806813   44052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0926 23:36:48.806839   44052 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-627811 && echo "test-preload-627811" | sudo tee /etc/hostname
	I0926 23:36:48.941733   44052 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-627811
	
	I0926 23:36:48.941777   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:48.945074   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.945514   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:48.945539   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:48.945709   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:48.945899   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.946078   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:48.946213   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:48.946379   44052 main.go:141] libmachine: Using SSH client type: native
	I0926 23:36:48.946658   44052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0926 23:36:48.946685   44052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-627811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-627811/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-627811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:36:49.075108   44052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:36:49.075137   44052 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:36:49.075168   44052 buildroot.go:174] setting up certificates
	I0926 23:36:49.075181   44052 provision.go:84] configureAuth start
	I0926 23:36:49.075192   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetMachineName
	I0926 23:36:49.075481   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetIP
	I0926 23:36:49.078867   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.079308   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:49.079344   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.079527   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:49.082055   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.082466   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:49.082496   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.082678   44052 provision.go:143] copyHostCerts
	I0926 23:36:49.082747   44052 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:36:49.082768   44052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:36:49.082880   44052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:36:49.083082   44052 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:36:49.083095   44052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:36:49.083129   44052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:36:49.083210   44052 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:36:49.083221   44052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:36:49.083255   44052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:36:49.083331   44052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.test-preload-627811 san=[127.0.0.1 192.168.39.68 localhost minikube test-preload-627811]
	I0926 23:36:49.404255   44052 provision.go:177] copyRemoteCerts
	I0926 23:36:49.404321   44052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:36:49.404350   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:49.407683   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.408105   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:49.408132   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.408334   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:49.408550   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:49.408758   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:49.408946   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:36:49.501258   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0926 23:36:49.534666   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:36:49.567466   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:36:49.601207   44052 provision.go:87] duration metric: took 526.010472ms to configureAuth
	I0926 23:36:49.601242   44052 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:36:49.601462   44052 config.go:182] Loaded profile config "test-preload-627811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0926 23:36:49.601541   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:49.604775   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.605232   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:49.605258   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.605480   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:49.605671   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:49.605870   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:49.606029   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:49.606203   44052 main.go:141] libmachine: Using SSH client type: native
	I0926 23:36:49.606392   44052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0926 23:36:49.606406   44052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:36:49.870382   44052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:36:49.870408   44052 machine.go:96] duration metric: took 1.191506952s to provisionDockerMachine
	I0926 23:36:49.870426   44052 start.go:293] postStartSetup for "test-preload-627811" (driver="kvm2")
	I0926 23:36:49.870441   44052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:36:49.870460   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:49.870795   44052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:36:49.870857   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:49.873836   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.874282   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:49.874308   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:49.874500   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:49.874715   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:49.874933   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:49.875099   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:36:49.966034   44052 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:36:49.971439   44052 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:36:49.971460   44052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:36:49.971533   44052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:36:49.971605   44052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:36:49.971705   44052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:36:49.984200   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:36:50.016951   44052 start.go:296] duration metric: took 146.510736ms for postStartSetup
	I0926 23:36:50.016988   44052 fix.go:56] duration metric: took 17.067678383s for fixHost
	I0926 23:36:50.017010   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:50.020179   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.020680   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:50.020714   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.020919   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:50.021120   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:50.021264   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:50.021427   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:50.021607   44052 main.go:141] libmachine: Using SSH client type: native
	I0926 23:36:50.021791   44052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0926 23:36:50.021801   44052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:36:50.141162   44052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758929810.106757093
	
	I0926 23:36:50.141186   44052 fix.go:216] guest clock: 1758929810.106757093
	I0926 23:36:50.141194   44052 fix.go:229] Guest: 2025-09-26 23:36:50.106757093 +0000 UTC Remote: 2025-09-26 23:36:50.016992028 +0000 UTC m=+20.444985239 (delta=89.765065ms)
	I0926 23:36:50.141221   44052 fix.go:200] guest clock delta is within tolerance: 89.765065ms
	I0926 23:36:50.141228   44052 start.go:83] releasing machines lock for "test-preload-627811", held for 17.191929231s
	I0926 23:36:50.141252   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:50.141537   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetIP
	I0926 23:36:50.144598   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.145023   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:50.145043   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.145195   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:50.145721   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:50.145907   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:36:50.145987   44052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:36:50.146047   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:50.146108   44052 ssh_runner.go:195] Run: cat /version.json
	I0926 23:36:50.146134   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:36:50.149395   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.149530   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.149820   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:50.149864   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.149911   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:50.149954   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:50.150051   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:50.150228   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:50.150252   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:36:50.150451   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:50.150457   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:36:50.150634   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:36:50.150634   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:36:50.150774   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:36:50.261179   44052 ssh_runner.go:195] Run: systemctl --version
	I0926 23:36:50.268472   44052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:36:50.418508   44052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:36:50.426224   44052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:36:50.426308   44052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:36:50.448443   44052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 23:36:50.448477   44052 start.go:495] detecting cgroup driver to use...
	I0926 23:36:50.448541   44052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:36:50.467321   44052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:36:50.486138   44052 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:36:50.486204   44052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:36:50.504927   44052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:36:50.522548   44052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:36:50.674694   44052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:36:50.908396   44052 docker.go:234] disabling docker service ...
	I0926 23:36:50.908482   44052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:36:50.928264   44052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:36:50.944046   44052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:36:51.105604   44052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:36:51.255186   44052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:36:51.273712   44052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:36:51.299439   44052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0926 23:36:51.299521   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.313502   44052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 23:36:51.313580   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.327243   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.341093   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.355104   44052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:36:51.369982   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.383752   44052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.406133   44052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:36:51.419932   44052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:36:51.432557   44052 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 23:36:51.432618   44052 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 23:36:51.460462   44052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:36:51.473908   44052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:36:51.624922   44052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:36:51.746469   44052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:36:51.746544   44052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:36:51.752646   44052 start.go:563] Will wait 60s for crictl version
	I0926 23:36:51.752718   44052 ssh_runner.go:195] Run: which crictl
	I0926 23:36:51.757277   44052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:36:51.799409   44052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 23:36:51.799483   44052 ssh_runner.go:195] Run: crio --version
	I0926 23:36:51.831381   44052 ssh_runner.go:195] Run: crio --version
	I0926 23:36:51.863901   44052 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0926 23:36:51.865376   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetIP
	I0926 23:36:51.868401   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:51.868776   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:36:51.868805   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:36:51.869097   44052 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 23:36:51.873914   44052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:36:51.889590   44052 kubeadm.go:883] updating cluster {Name:test-preload-627811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-627811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:36:51.889730   44052 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0926 23:36:51.889778   44052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:36:51.930363   44052 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0926 23:36:51.930433   44052 ssh_runner.go:195] Run: which lz4
	I0926 23:36:51.935354   44052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:36:51.940893   44052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 23:36:51.940957   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0926 23:36:53.610376   44052 crio.go:462] duration metric: took 1.675048921s to copy over tarball
	I0926 23:36:53.610449   44052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:36:55.345359   44052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.734880372s)
	I0926 23:36:55.345388   44052 crio.go:469] duration metric: took 1.73498318s to extract the tarball
	I0926 23:36:55.345410   44052 ssh_runner.go:146] rm: /preloaded.tar.lz4
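This block is the preload path: crictl finds no preloaded images, the ~380 MB tarball is copied over SSH, extracted into /var with lz4, and then removed. A minimal sketch of the extract step, assuming tar and lz4 are installed on the guest (hypothetical, mirroring the command in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation as the log, preserving extended attributes so image
	// layers keep their file capabilities after extraction.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract preload tarball:", err)
	}
}
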
	I0926 23:36:55.386598   44052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:36:55.437806   44052 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:36:55.437844   44052 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:36:55.437854   44052 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.32.0 crio true true} ...
	I0926 23:36:55.437943   44052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-627811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-627811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 23:36:55.438045   44052 ssh_runner.go:195] Run: crio config
	I0926 23:36:55.487183   44052 cni.go:84] Creating CNI manager for ""
	I0926 23:36:55.487205   44052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:36:55.487220   44052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:36:55.487239   44052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-627811 NodeName:test-preload-627811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:36:55.487350   44052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-627811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:36:55.487409   44052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0926 23:36:55.501148   44052 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:36:55.501222   44052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:36:55.514498   44052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0926 23:36:55.537368   44052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:36:55.560667   44052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
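The rendered kubeadm config copied above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick way to sanity-check that such a file parses, sketched here with gopkg.in/yaml.v3 as an assumption; minikube itself validates the config differently:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("open config:", err)
		return
	}
	defer f.Close()

	// Decode each "---"-separated document and report its kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
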
	I0926 23:36:55.584521   44052 ssh_runner.go:195] Run: grep 192.168.39.68	control-plane.minikube.internal$ /etc/hosts
	I0926 23:36:55.589044   44052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:36:55.604771   44052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:36:55.752446   44052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:36:55.786293   44052 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811 for IP: 192.168.39.68
	I0926 23:36:55.786319   44052 certs.go:195] generating shared ca certs ...
	I0926 23:36:55.786343   44052 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:36:55.786537   44052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:36:55.786602   44052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:36:55.786613   44052 certs.go:257] generating profile certs ...
	I0926 23:36:55.786728   44052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.key
	I0926 23:36:55.786868   44052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/apiserver.key.5a4214d0
	I0926 23:36:55.786949   44052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/proxy-client.key
	I0926 23:36:55.787113   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:36:55.787161   44052 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:36:55.787178   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:36:55.787215   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:36:55.787264   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:36:55.787305   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:36:55.787366   44052 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:36:55.788175   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:36:55.828625   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:36:55.863304   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:36:55.895447   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:36:55.927430   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0926 23:36:55.959768   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:36:55.991114   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:36:56.022085   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0926 23:36:56.053691   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:36:56.083945   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:36:56.115243   44052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:36:56.146933   44052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:36:56.169871   44052 ssh_runner.go:195] Run: openssl version
	I0926 23:36:56.176888   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:36:56.191785   44052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:36:56.197542   44052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:36:56.197608   44052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:36:56.205643   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:36:56.220337   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:36:56.234896   44052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:36:56.240727   44052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:36:56.240800   44052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:36:56.248424   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:36:56.263058   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:36:56.276894   44052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:36:56.282870   44052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:36:56.282932   44052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:36:56.290524   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
	I0926 23:36:56.305501   44052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:36:56.311877   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 23:36:56.320339   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 23:36:56.328572   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 23:36:56.336903   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 23:36:56.345047   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 23:36:56.353269   44052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
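Each of the openssl x509 -checkend 86400 calls above asks whether the given control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check written against Go's standard library, as a hedged sketch (the path is taken from the log, any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// Equivalent of "openssl x509 -checkend 86400": does it expire within 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past the next 24h:", cert.NotAfter)
	}
}
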
	I0926 23:36:56.361590   44052 kubeadm.go:400] StartCluster: {Name:test-preload-627811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-pr
eload-627811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:36:56.361667   44052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:36:56.361729   44052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:36:56.402841   44052 cri.go:89] found id: ""
	I0926 23:36:56.402911   44052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:36:56.416607   44052 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0926 23:36:56.416635   44052 kubeadm.go:597] restartPrimaryControlPlane start ...
	I0926 23:36:56.416679   44052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 23:36:56.429472   44052 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:36:56.429874   44052 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-627811" does not appear in /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:36:56.429977   44052 kubeconfig.go:62] /home/jenkins/minikube-integration/21642-6020/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-627811" cluster setting kubeconfig missing "test-preload-627811" context setting]
	I0926 23:36:56.430232   44052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:36:56.430702   44052 kapi.go:59] client config for test-preload-627811: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:36:56.431100   44052 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 23:36:56.431114   44052 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0926 23:36:56.431118   44052 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0926 23:36:56.431122   44052 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 23:36:56.431125   44052 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0926 23:36:56.431395   44052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 23:36:56.443778   44052 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.68
	I0926 23:36:56.443815   44052 kubeadm.go:1160] stopping kube-system containers ...
	I0926 23:36:56.443836   44052 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0926 23:36:56.443881   44052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:36:56.484725   44052 cri.go:89] found id: ""
	I0926 23:36:56.484811   44052 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 23:36:56.509241   44052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:36:56.522223   44052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:36:56.522242   44052 kubeadm.go:157] found existing configuration files:
	
	I0926 23:36:56.522286   44052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:36:56.534386   44052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:36:56.534453   44052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:36:56.547011   44052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:36:56.558993   44052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:36:56.559049   44052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:36:56.572080   44052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:36:56.583924   44052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:36:56.584007   44052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:36:56.596225   44052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:36:56.607426   44052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:36:56.607491   44052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 23:36:56.619997   44052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:36:56.632812   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:36:56.697226   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:36:57.825662   44052 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.128397779s)
	I0926 23:36:57.825744   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:36:58.084201   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:36:58.163980   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:36:58.274065   44052 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:36:58.274148   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:36:58.775059   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:36:59.274678   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:36:59.774613   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:37:00.274599   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:37:00.352409   44052 api_server.go:72] duration metric: took 2.078336748s to wait for apiserver process to appear ...
	I0926 23:37:00.352453   44052 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:37:00.352478   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:02.879121   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:37:02.879157   44052 api_server.go:103] status: https://192.168.39.68:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:37:02.879175   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:02.955661   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:37:02.955694   44052 api_server.go:103] status: https://192.168.39.68:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:37:03.352740   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:03.368661   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:37:03.368686   44052 api_server.go:103] status: https://192.168.39.68:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 23:37:03.853416   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:03.859111   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:37:03.859144   44052 api_server.go:103] status: https://192.168.39.68:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
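The 403s above come from probing /healthz as the anonymous user before the RBAC bootstrap roles exist, and the 500s show the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) still completing; the check is simply retried until it returns 200, as happens next. A minimal polling sketch, with the endpoint, timing, and TLS handling all assumptions rather than minikube's real client:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The apiserver serves a cluster-signed cert; skip verification for
		// the sketch only. The real check would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.68:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
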
	I0926 23:37:04.352645   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:04.357124   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0926 23:37:04.363469   44052 api_server.go:141] control plane version: v1.32.0
	I0926 23:37:04.363501   44052 api_server.go:131] duration metric: took 4.011038649s to wait for apiserver health ...
	I0926 23:37:04.363512   44052 cni.go:84] Creating CNI manager for ""
	I0926 23:37:04.363520   44052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:37:04.365280   44052 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 23:37:04.366997   44052 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:37:04.394876   44052 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
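The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI plugin for the 10.244.0.0/16 pod CIDR chosen above. As a rough illustration only, a generic bridge-plus-portmap conflist written from Go; this is not claimed to be byte-for-byte what minikube generates:

package main

import (
	"fmt"
	"os"
)

func main() {
	// A generic bridge CNI configuration for the 10.244.0.0/16 pod CIDR.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`
	// Written to /tmp here rather than /etc/cni/net.d to keep the sketch safe.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
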
	I0926 23:37:04.426449   44052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:37:04.432980   44052 system_pods.go:59] 7 kube-system pods found
	I0926 23:37:04.433035   44052 system_pods.go:61] "coredns-668d6bf9bc-l42bz" [efa565eb-1043-4e91-b033-f3728fdc1e62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:37:04.433048   44052 system_pods.go:61] "etcd-test-preload-627811" [7fc594d7-bdf0-43f7-8da2-c0639475c610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:37:04.433059   44052 system_pods.go:61] "kube-apiserver-test-preload-627811" [83ba3777-781c-4779-92b7-da50c077d756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:37:04.433069   44052 system_pods.go:61] "kube-controller-manager-test-preload-627811" [efca7380-67ac-4070-8a22-198d21d2d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:37:04.433077   44052 system_pods.go:61] "kube-proxy-gmzk7" [471b4ccc-716d-4b39-aa1e-48ae441c7509] Running
	I0926 23:37:04.433087   44052 system_pods.go:61] "kube-scheduler-test-preload-627811" [1cc59e5a-7f29-4d32-99ba-5d616cf3be2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:37:04.433098   44052 system_pods.go:61] "storage-provisioner" [661427d2-9638-47f9-bba5-2369178c0a98] Running
	I0926 23:37:04.433112   44052 system_pods.go:74] duration metric: took 6.64051ms to wait for pod list to return data ...
	I0926 23:37:04.433124   44052 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:37:04.436482   44052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:37:04.436503   44052 node_conditions.go:123] node cpu capacity is 2
	I0926 23:37:04.436515   44052 node_conditions.go:105] duration metric: took 3.386172ms to run NodePressure ...
	I0926 23:37:04.436562   44052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:37:04.696315   44052 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I0926 23:37:04.699552   44052 kubeadm.go:743] kubelet initialised
	I0926 23:37:04.699575   44052 kubeadm.go:744] duration metric: took 3.236677ms waiting for restarted kubelet to initialise ...
	I0926 23:37:04.699595   44052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:37:04.716345   44052 ops.go:34] apiserver oom_adj: -16
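Reading /proc/<apiserver pid>/oom_adj and getting -16 confirms the kubelet applied OOM protection to the apiserver, so the kernel's OOM killer strongly prefers other victims. A small sketch of the same read (hypothetical, locating the process with pgrep as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, similar to "pgrep -xnf" above.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("kube-apiserver pid %s oom_adj = %s", pid, adj)
}
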
	I0926 23:37:04.716373   44052 kubeadm.go:601] duration metric: took 8.299731925s to restartPrimaryControlPlane
	I0926 23:37:04.716386   44052 kubeadm.go:402] duration metric: took 8.35480234s to StartCluster
	I0926 23:37:04.716411   44052 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:37:04.716504   44052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:37:04.717119   44052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:37:04.717337   44052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:37:04.717422   44052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:37:04.717523   44052 addons.go:69] Setting storage-provisioner=true in profile "test-preload-627811"
	I0926 23:37:04.717554   44052 addons.go:238] Setting addon storage-provisioner=true in "test-preload-627811"
	I0926 23:37:04.717548   44052 addons.go:69] Setting default-storageclass=true in profile "test-preload-627811"
	W0926 23:37:04.717566   44052 addons.go:247] addon storage-provisioner should already be in state true
	I0926 23:37:04.717578   44052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-627811"
	I0926 23:37:04.717613   44052 host.go:66] Checking if "test-preload-627811" exists ...
	I0926 23:37:04.717588   44052 config.go:182] Loaded profile config "test-preload-627811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0926 23:37:04.718042   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:37:04.718051   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:37:04.718087   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:37:04.718227   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:37:04.718973   44052 out.go:179] * Verifying Kubernetes components...
	I0926 23:37:04.720429   44052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:37:04.732184   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0926 23:37:04.732253   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I0926 23:37:04.732704   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:37:04.732739   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:37:04.733204   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:37:04.733223   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:37:04.733324   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:37:04.733353   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:37:04.733606   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:37:04.733707   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:37:04.733812   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetState
	I0926 23:37:04.734174   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:37:04.734207   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:37:04.736222   44052 kapi.go:59] client config for test-preload-627811: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:37:04.736582   44052 addons.go:238] Setting addon default-storageclass=true in "test-preload-627811"
	W0926 23:37:04.736600   44052 addons.go:247] addon default-storageclass should already be in state true
	I0926 23:37:04.736630   44052 host.go:66] Checking if "test-preload-627811" exists ...
	I0926 23:37:04.736929   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:37:04.736972   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:37:04.748715   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0926 23:37:04.749206   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:37:04.749655   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:37:04.749678   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:37:04.750095   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:37:04.750285   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetState
	I0926 23:37:04.752275   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:37:04.752651   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
	I0926 23:37:04.753032   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:37:04.753441   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:37:04.753460   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:37:04.753787   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:37:04.754238   44052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:37:04.754398   44052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:37:04.754436   44052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:37:04.757994   44052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:37:04.758012   44052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:37:04.758028   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:37:04.761865   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:37:04.762432   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:37:04.762465   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:37:04.762657   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:37:04.762848   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:37:04.763024   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:37:04.763207   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:37:04.769135   44052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I0926 23:37:04.769560   44052 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:37:04.769975   44052 main.go:141] libmachine: Using API Version  1
	I0926 23:37:04.770000   44052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:37:04.770359   44052 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:37:04.770662   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetState
	I0926 23:37:04.772606   44052 main.go:141] libmachine: (test-preload-627811) Calling .DriverName
	I0926 23:37:04.772861   44052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:37:04.772878   44052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:37:04.772899   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHHostname
	I0926 23:37:04.776595   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:37:04.777111   44052 main.go:141] libmachine: (test-preload-627811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:c2:a7", ip: ""} in network mk-test-preload-627811: {Iface:virbr1 ExpiryTime:2025-09-27 00:36:45 +0000 UTC Type:0 Mac:52:54:00:a7:c2:a7 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:test-preload-627811 Clientid:01:52:54:00:a7:c2:a7}
	I0926 23:37:04.777149   44052 main.go:141] libmachine: (test-preload-627811) DBG | domain test-preload-627811 has defined IP address 192.168.39.68 and MAC address 52:54:00:a7:c2:a7 in network mk-test-preload-627811
	I0926 23:37:04.777379   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHPort
	I0926 23:37:04.777541   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHKeyPath
	I0926 23:37:04.777691   44052 main.go:141] libmachine: (test-preload-627811) Calling .GetSSHUsername
	I0926 23:37:04.777844   44052 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/test-preload-627811/id_rsa Username:docker}
	I0926 23:37:04.970791   44052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:37:04.999374   44052 node_ready.go:35] waiting up to 6m0s for node "test-preload-627811" to be "Ready" ...
	I0926 23:37:05.006050   44052 node_ready.go:49] node "test-preload-627811" is "Ready"
	I0926 23:37:05.006079   44052 node_ready.go:38] duration metric: took 6.673933ms for node "test-preload-627811" to be "Ready" ...
	I0926 23:37:05.006094   44052 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:37:05.006149   44052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:37:05.035126   44052 api_server.go:72] duration metric: took 317.761862ms to wait for apiserver process to appear ...
	I0926 23:37:05.035151   44052 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:37:05.035166   44052 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0926 23:37:05.042799   44052 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0926 23:37:05.043615   44052 api_server.go:141] control plane version: v1.32.0
	I0926 23:37:05.043642   44052 api_server.go:131] duration metric: took 8.485751ms to wait for apiserver health ...
	I0926 23:37:05.043650   44052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:37:05.052844   44052 system_pods.go:59] 7 kube-system pods found
	I0926 23:37:05.052879   44052 system_pods.go:61] "coredns-668d6bf9bc-l42bz" [efa565eb-1043-4e91-b033-f3728fdc1e62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:37:05.052890   44052 system_pods.go:61] "etcd-test-preload-627811" [7fc594d7-bdf0-43f7-8da2-c0639475c610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:37:05.052902   44052 system_pods.go:61] "kube-apiserver-test-preload-627811" [83ba3777-781c-4779-92b7-da50c077d756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:37:05.052914   44052 system_pods.go:61] "kube-controller-manager-test-preload-627811" [efca7380-67ac-4070-8a22-198d21d2d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:37:05.052920   44052 system_pods.go:61] "kube-proxy-gmzk7" [471b4ccc-716d-4b39-aa1e-48ae441c7509] Running
	I0926 23:37:05.052928   44052 system_pods.go:61] "kube-scheduler-test-preload-627811" [1cc59e5a-7f29-4d32-99ba-5d616cf3be2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:37:05.052938   44052 system_pods.go:61] "storage-provisioner" [661427d2-9638-47f9-bba5-2369178c0a98] Running
	I0926 23:37:05.052946   44052 system_pods.go:74] duration metric: took 9.290401ms to wait for pod list to return data ...
	I0926 23:37:05.052965   44052 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:37:05.059023   44052 default_sa.go:45] found service account: "default"
	I0926 23:37:05.059046   44052 default_sa.go:55] duration metric: took 6.074784ms for default service account to be created ...
	I0926 23:37:05.059054   44052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:37:05.064437   44052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:37:05.064618   44052 system_pods.go:86] 7 kube-system pods found
	I0926 23:37:05.064641   44052 system_pods.go:89] "coredns-668d6bf9bc-l42bz" [efa565eb-1043-4e91-b033-f3728fdc1e62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:37:05.064648   44052 system_pods.go:89] "etcd-test-preload-627811" [7fc594d7-bdf0-43f7-8da2-c0639475c610] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:37:05.064659   44052 system_pods.go:89] "kube-apiserver-test-preload-627811" [83ba3777-781c-4779-92b7-da50c077d756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:37:05.064665   44052 system_pods.go:89] "kube-controller-manager-test-preload-627811" [efca7380-67ac-4070-8a22-198d21d2d0b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:37:05.064674   44052 system_pods.go:89] "kube-proxy-gmzk7" [471b4ccc-716d-4b39-aa1e-48ae441c7509] Running
	I0926 23:37:05.064679   44052 system_pods.go:89] "kube-scheduler-test-preload-627811" [1cc59e5a-7f29-4d32-99ba-5d616cf3be2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:37:05.064682   44052 system_pods.go:89] "storage-provisioner" [661427d2-9638-47f9-bba5-2369178c0a98] Running
	I0926 23:37:05.064694   44052 system_pods.go:126] duration metric: took 5.635262ms to wait for k8s-apps to be running ...
	I0926 23:37:05.064702   44052 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:37:05.064739   44052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:37:05.188239   44052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:37:05.266248   44052 main.go:141] libmachine: Making call to close driver server
	I0926 23:37:05.266280   44052 main.go:141] libmachine: (test-preload-627811) Calling .Close
	I0926 23:37:05.266302   44052 system_svc.go:56] duration metric: took 201.592032ms WaitForService to wait for kubelet
	I0926 23:37:05.266328   44052 kubeadm.go:586] duration metric: took 548.967559ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:37:05.266356   44052 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:37:05.266610   44052 main.go:141] libmachine: (test-preload-627811) DBG | Closing plugin on server side
	I0926 23:37:05.266667   44052 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:37:05.266682   44052 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:37:05.266696   44052 main.go:141] libmachine: Making call to close driver server
	I0926 23:37:05.266707   44052 main.go:141] libmachine: (test-preload-627811) Calling .Close
	I0926 23:37:05.266941   44052 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:37:05.266957   44052 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:37:05.266971   44052 main.go:141] libmachine: (test-preload-627811) DBG | Closing plugin on server side
	I0926 23:37:05.272187   44052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:37:05.272203   44052 node_conditions.go:123] node cpu capacity is 2
	I0926 23:37:05.272212   44052 node_conditions.go:105] duration metric: took 5.851209ms to run NodePressure ...
	I0926 23:37:05.272223   44052 start.go:241] waiting for startup goroutines ...
	I0926 23:37:05.275257   44052 main.go:141] libmachine: Making call to close driver server
	I0926 23:37:05.275277   44052 main.go:141] libmachine: (test-preload-627811) Calling .Close
	I0926 23:37:05.275548   44052 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:37:05.275581   44052 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:37:05.275603   44052 main.go:141] libmachine: (test-preload-627811) DBG | Closing plugin on server side
	I0926 23:37:05.930711   44052 main.go:141] libmachine: Making call to close driver server
	I0926 23:37:05.930730   44052 main.go:141] libmachine: (test-preload-627811) Calling .Close
	I0926 23:37:05.931053   44052 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:37:05.931071   44052 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:37:05.931081   44052 main.go:141] libmachine: Making call to close driver server
	I0926 23:37:05.931089   44052 main.go:141] libmachine: (test-preload-627811) Calling .Close
	I0926 23:37:05.931313   44052 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:37:05.931333   44052 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:37:05.931355   44052 main.go:141] libmachine: (test-preload-627811) DBG | Closing plugin on server side
	I0926 23:37:05.933793   44052 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0926 23:37:05.935241   44052 addons.go:514] duration metric: took 1.217833368s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0926 23:37:05.935297   44052 start.go:246] waiting for cluster config update ...
	I0926 23:37:05.935311   44052 start.go:255] writing updated cluster config ...
	I0926 23:37:05.935637   44052 ssh_runner.go:195] Run: rm -f paused
	I0926 23:37:05.944231   44052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:37:05.944993   44052 kapi.go:59] client config for test-preload-627811: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/test-preload-627811/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:37:05.951711   44052 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-l42bz" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:37:07.956867   44052 pod_ready.go:104] pod "coredns-668d6bf9bc-l42bz" is not "Ready", error: <nil>
	W0926 23:37:09.958625   44052 pod_ready.go:104] pod "coredns-668d6bf9bc-l42bz" is not "Ready", error: <nil>
	I0926 23:37:10.457707   44052 pod_ready.go:94] pod "coredns-668d6bf9bc-l42bz" is "Ready"
	I0926 23:37:10.457732   44052 pod_ready.go:86] duration metric: took 4.505994215s for pod "coredns-668d6bf9bc-l42bz" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:10.460605   44052 pod_ready.go:83] waiting for pod "etcd-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:37:12.468276   44052 pod_ready.go:104] pod "etcd-test-preload-627811" is not "Ready", error: <nil>
	I0926 23:37:12.966932   44052 pod_ready.go:94] pod "etcd-test-preload-627811" is "Ready"
	I0926 23:37:12.966971   44052 pod_ready.go:86] duration metric: took 2.506344665s for pod "etcd-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:12.969636   44052 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:37:14.976498   44052 pod_ready.go:104] pod "kube-apiserver-test-preload-627811" is not "Ready", error: <nil>
	I0926 23:37:16.976641   44052 pod_ready.go:94] pod "kube-apiserver-test-preload-627811" is "Ready"
	I0926 23:37:16.976666   44052 pod_ready.go:86] duration metric: took 4.007003602s for pod "kube-apiserver-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:16.979494   44052 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:16.984713   44052 pod_ready.go:94] pod "kube-controller-manager-test-preload-627811" is "Ready"
	I0926 23:37:16.984730   44052 pod_ready.go:86] duration metric: took 5.218637ms for pod "kube-controller-manager-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:16.987194   44052 pod_ready.go:83] waiting for pod "kube-proxy-gmzk7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:16.991128   44052 pod_ready.go:94] pod "kube-proxy-gmzk7" is "Ready"
	I0926 23:37:16.991149   44052 pod_ready.go:86] duration metric: took 3.932674ms for pod "kube-proxy-gmzk7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:16.993239   44052 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:18.500426   44052 pod_ready.go:94] pod "kube-scheduler-test-preload-627811" is "Ready"
	I0926 23:37:18.500455   44052 pod_ready.go:86] duration metric: took 1.507194099s for pod "kube-scheduler-test-preload-627811" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:37:18.500467   44052 pod_ready.go:40] duration metric: took 12.556196639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:37:18.542188   44052 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0926 23:37:18.543904   44052 out.go:203] 
	W0926 23:37:18.545204   44052 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0926 23:37:18.546433   44052 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0926 23:37:18.547763   44052 out.go:179] * Done! kubectl is now configured to use "test-preload-627811" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.467616753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929839467593674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31d7e73f-5715-4d49-b81b-f74564fa107f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.468391497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9526bfed-de61-45e1-a8fe-a92a69555f37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.468529083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9526bfed-de61-45e1-a8fe-a92a69555f37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.469194836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a63235e8fe8113a30e0d591b96ac91d697d932bec836f43fff063d166e8cde5,PodSandboxId:6dfec3694d6fb67039c5f5c520a0d7b40a7dd662e8730b501b120a3dc7143dcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758929827255006426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l42bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa565eb-1043-4e91-b033-f3728fdc1e62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e0a4ef4017820b1d342b5f1f0314492d30e60817fd97c9fb63e0b62e26568,PodSandboxId:65abc3b7e70be5791dd907bcc3f12a9a1283299815299fd77ce7b4d41c8534e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758929823610955600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmzk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 471b4ccc-716d-4b39-aa1e-48ae441c7509,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f857e03dc39af567cf9c7de1ca46cd37821adb97806a43740ecccca8cd7f99e2,PodSandboxId:aa01378e2b28650d22fd4dd4be476ecd8d02c424a62d3509f4d3aa02e9ca8975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758929823597519504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
1427d2-9638-47f9-bba5-2369178c0a98,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82a1620ecda0fb4ffcd723169a7e3d7c6894e04c459e3615ab9b71798813118,PodSandboxId:6fc1fe302252e3650f61ed2fc967c4d81fc841f7d177e8fe6cbd3c448e0ca327,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758929819624307884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b382db5e5398019256fdc753b105f6b8,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fa82ca1b9bf965f90e8bd44b1fefb401b341b6655f7459560410cad3b23cd5,PodSandboxId:2a1fbc5f3c8eef819766223e1496c6e65c2afd1f266bbd13718f3d785971596a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758929819598501128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011c8dbc019d617ea27fb193066f904e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19910a132ee53bb2cbd95705ffc1c21843d2880fb17b61fe93c7406b20119a3f,PodSandboxId:8ea41aa24fbccf8561f3672cb5b86c8471d23bc75344b906d035772e8a6ac1e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758929819576375153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4374a17ecff362c72e5c22934913ad96,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b75b1121511b68266020bd75198ea279c00c090803258a82ad00111c0347a53,PodSandboxId:dca3d7e3c24353bf7c3201fa67d9597414b558ff1ddfcd7e92b9eccd0670652c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758929819569627758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca1e3600efc5359d28d2c276b34e5c8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9526bfed-de61-45e1-a8fe-a92a69555f37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.511243329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78daf9de-e931-42c4-8df1-4126e5af24aa name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.511318807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78daf9de-e931-42c4-8df1-4126e5af24aa name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.512476178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c124014-0403-449e-8462-f034e6fa986f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.513051170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929839513026732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c124014-0403-449e-8462-f034e6fa986f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.513693000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40e2cc9b-3c10-4f53-a4db-e12ce5c80616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.513760003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40e2cc9b-3c10-4f53-a4db-e12ce5c80616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.514021910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a63235e8fe8113a30e0d591b96ac91d697d932bec836f43fff063d166e8cde5,PodSandboxId:6dfec3694d6fb67039c5f5c520a0d7b40a7dd662e8730b501b120a3dc7143dcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758929827255006426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l42bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa565eb-1043-4e91-b033-f3728fdc1e62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e0a4ef4017820b1d342b5f1f0314492d30e60817fd97c9fb63e0b62e26568,PodSandboxId:65abc3b7e70be5791dd907bcc3f12a9a1283299815299fd77ce7b4d41c8534e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758929823610955600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmzk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 471b4ccc-716d-4b39-aa1e-48ae441c7509,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f857e03dc39af567cf9c7de1ca46cd37821adb97806a43740ecccca8cd7f99e2,PodSandboxId:aa01378e2b28650d22fd4dd4be476ecd8d02c424a62d3509f4d3aa02e9ca8975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758929823597519504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
1427d2-9638-47f9-bba5-2369178c0a98,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82a1620ecda0fb4ffcd723169a7e3d7c6894e04c459e3615ab9b71798813118,PodSandboxId:6fc1fe302252e3650f61ed2fc967c4d81fc841f7d177e8fe6cbd3c448e0ca327,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758929819624307884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b382db5e5398019256fdc753b105f6b8,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fa82ca1b9bf965f90e8bd44b1fefb401b341b6655f7459560410cad3b23cd5,PodSandboxId:2a1fbc5f3c8eef819766223e1496c6e65c2afd1f266bbd13718f3d785971596a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758929819598501128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011c8dbc019d617ea27fb193066f904e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19910a132ee53bb2cbd95705ffc1c21843d2880fb17b61fe93c7406b20119a3f,PodSandboxId:8ea41aa24fbccf8561f3672cb5b86c8471d23bc75344b906d035772e8a6ac1e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758929819576375153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4374a17ecff362c72e5c22934913ad96,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b75b1121511b68266020bd75198ea279c00c090803258a82ad00111c0347a53,PodSandboxId:dca3d7e3c24353bf7c3201fa67d9597414b558ff1ddfcd7e92b9eccd0670652c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758929819569627758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca1e3600efc5359d28d2c276b34e5c8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40e2cc9b-3c10-4f53-a4db-e12ce5c80616 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.556511788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76c60cc9-a3cc-4f14-b74a-584176860df4 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.556603377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76c60cc9-a3cc-4f14-b74a-584176860df4 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.558342736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78de7d76-7c5f-4a78-a1cd-07bf1291f0fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.558751889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929839558726840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78de7d76-7c5f-4a78-a1cd-07bf1291f0fb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.559405159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99b539d9-163b-4e5f-8644-213f1a83306e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.559754380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99b539d9-163b-4e5f-8644-213f1a83306e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.560116974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a63235e8fe8113a30e0d591b96ac91d697d932bec836f43fff063d166e8cde5,PodSandboxId:6dfec3694d6fb67039c5f5c520a0d7b40a7dd662e8730b501b120a3dc7143dcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758929827255006426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l42bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa565eb-1043-4e91-b033-f3728fdc1e62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e0a4ef4017820b1d342b5f1f0314492d30e60817fd97c9fb63e0b62e26568,PodSandboxId:65abc3b7e70be5791dd907bcc3f12a9a1283299815299fd77ce7b4d41c8534e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758929823610955600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmzk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 471b4ccc-716d-4b39-aa1e-48ae441c7509,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f857e03dc39af567cf9c7de1ca46cd37821adb97806a43740ecccca8cd7f99e2,PodSandboxId:aa01378e2b28650d22fd4dd4be476ecd8d02c424a62d3509f4d3aa02e9ca8975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758929823597519504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
1427d2-9638-47f9-bba5-2369178c0a98,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82a1620ecda0fb4ffcd723169a7e3d7c6894e04c459e3615ab9b71798813118,PodSandboxId:6fc1fe302252e3650f61ed2fc967c4d81fc841f7d177e8fe6cbd3c448e0ca327,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758929819624307884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b382db5e5398019256fdc753b105f6b8,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fa82ca1b9bf965f90e8bd44b1fefb401b341b6655f7459560410cad3b23cd5,PodSandboxId:2a1fbc5f3c8eef819766223e1496c6e65c2afd1f266bbd13718f3d785971596a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758929819598501128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011c8dbc019d617ea27fb193066f904e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19910a132ee53bb2cbd95705ffc1c21843d2880fb17b61fe93c7406b20119a3f,PodSandboxId:8ea41aa24fbccf8561f3672cb5b86c8471d23bc75344b906d035772e8a6ac1e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758929819576375153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4374a17ecff362c72e5c22934913ad96,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b75b1121511b68266020bd75198ea279c00c090803258a82ad00111c0347a53,PodSandboxId:dca3d7e3c24353bf7c3201fa67d9597414b558ff1ddfcd7e92b9eccd0670652c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758929819569627758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca1e3600efc5359d28d2c276b34e5c8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99b539d9-163b-4e5f-8644-213f1a83306e name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.597041080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e15c4523-a0a3-4489-b303-85b5eec83d5c name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.597334173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e15c4523-a0a3-4489-b303-85b5eec83d5c name=/runtime.v1.RuntimeService/Version
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.599026386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddc1d7e4-7abc-4a56-9f3a-1d9f6b252304 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.600179834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929839600152564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddc1d7e4-7abc-4a56-9f3a-1d9f6b252304 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.600885774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e49c33d7-ed27-4ab5-a54f-2d36904b09fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.601014739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e49c33d7-ed27-4ab5-a54f-2d36904b09fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:37:19 test-preload-627811 crio[828]: time="2025-09-26 23:37:19.601188971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a63235e8fe8113a30e0d591b96ac91d697d932bec836f43fff063d166e8cde5,PodSandboxId:6dfec3694d6fb67039c5f5c520a0d7b40a7dd662e8730b501b120a3dc7143dcf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758929827255006426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l42bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa565eb-1043-4e91-b033-f3728fdc1e62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e0a4ef4017820b1d342b5f1f0314492d30e60817fd97c9fb63e0b62e26568,PodSandboxId:65abc3b7e70be5791dd907bcc3f12a9a1283299815299fd77ce7b4d41c8534e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758929823610955600,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gmzk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 471b4ccc-716d-4b39-aa1e-48ae441c7509,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f857e03dc39af567cf9c7de1ca46cd37821adb97806a43740ecccca8cd7f99e2,PodSandboxId:aa01378e2b28650d22fd4dd4be476ecd8d02c424a62d3509f4d3aa02e9ca8975,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758929823597519504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66
1427d2-9638-47f9-bba5-2369178c0a98,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82a1620ecda0fb4ffcd723169a7e3d7c6894e04c459e3615ab9b71798813118,PodSandboxId:6fc1fe302252e3650f61ed2fc967c4d81fc841f7d177e8fe6cbd3c448e0ca327,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758929819624307884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b382db5e5398019256fdc753b105f6b8,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fa82ca1b9bf965f90e8bd44b1fefb401b341b6655f7459560410cad3b23cd5,PodSandboxId:2a1fbc5f3c8eef819766223e1496c6e65c2afd1f266bbd13718f3d785971596a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758929819598501128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011c8dbc019d617ea27fb193066f904e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19910a132ee53bb2cbd95705ffc1c21843d2880fb17b61fe93c7406b20119a3f,PodSandboxId:8ea41aa24fbccf8561f3672cb5b86c8471d23bc75344b906d035772e8a6ac1e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758929819576375153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4374a17ecff362c72e5c22934913ad96,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b75b1121511b68266020bd75198ea279c00c090803258a82ad00111c0347a53,PodSandboxId:dca3d7e3c24353bf7c3201fa67d9597414b558ff1ddfcd7e92b9eccd0670652c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758929819569627758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-627811,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca1e3600efc5359d28d2c276b34e5c8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e49c33d7-ed27-4ab5-a54f-2d36904b09fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a63235e8fe81       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   6dfec3694d6fb       coredns-668d6bf9bc-l42bz
	257e0a4ef4017       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   65abc3b7e70be       kube-proxy-gmzk7
	f857e03dc39af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   aa01378e2b286       storage-provisioner
	a82a1620ecda0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   6fc1fe302252e       etcd-test-preload-627811
	54fa82ca1b9bf       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   2a1fbc5f3c8ee       kube-scheduler-test-preload-627811
	19910a132ee53       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   8ea41aa24fbcc       kube-apiserver-test-preload-627811
	3b75b1121511b       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   dca3d7e3c2435       kube-controller-manager-test-preload-627811
	
	
	==> coredns [4a63235e8fe8113a30e0d591b96ac91d697d932bec836f43fff063d166e8cde5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35262 - 60020 "HINFO IN 667458709774847173.3814580857796335677. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020367675s
	
	
	==> describe nodes <==
	Name:               test-preload-627811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-627811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=test-preload-627811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:36:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-627811
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 23:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:37:04 +0000   Fri, 26 Sep 2025 23:36:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:37:04 +0000   Fri, 26 Sep 2025 23:36:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:37:04 +0000   Fri, 26 Sep 2025 23:36:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:37:04 +0000   Fri, 26 Sep 2025 23:37:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    test-preload-627811
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 f05d7928370540b9a168a571110811ee
	  System UUID:                f05d7928-3705-40b9-a168-a571110811ee
	  Boot ID:                    69fca6d9-4d3b-4799-87d2-346cdd37930f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-l42bz                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     69s
	  kube-system                 etcd-test-preload-627811                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         73s
	  kube-system                 kube-apiserver-test-preload-627811             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-test-preload-627811    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-gmzk7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-test-preload-627811             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 67s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s                kubelet          Node test-preload-627811 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    73s                kubelet          Node test-preload-627811 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s                kubelet          Node test-preload-627811 status is now: NodeHasSufficientPID
	  Normal   NodeReady                73s                kubelet          Node test-preload-627811 status is now: NodeReady
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           70s                node-controller  Node test-preload-627811 event: Registered Node test-preload-627811 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-627811 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-627811 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-627811 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-627811 has been rebooted, boot id: 69fca6d9-4d3b-4799-87d2-346cdd37930f
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-627811 event: Registered Node test-preload-627811 in Controller
	
	
	==> dmesg <==
	[Sep26 23:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000040] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002137] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.042063] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087396] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.108088] kauditd_printk_skb: 102 callbacks suppressed
	[Sep26 23:37] kauditd_printk_skb: 177 callbacks suppressed
	[  +3.214407] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [a82a1620ecda0fb4ffcd723169a7e3d7c6894e04c459e3615ab9b71798813118] <==
	{"level":"info","ts":"2025-09-26T23:37:00.130346Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","added-peer-id":"821abe7be15f44a3","added-peer-peer-urls":["https://192.168.39.68:2380"]}
	{"level":"info","ts":"2025-09-26T23:37:00.130506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-26T23:37:00.130553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-26T23:37:00.134019Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-26T23:37:00.142749Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-26T23:37:00.148973Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"821abe7be15f44a3","initial-advertise-peer-urls":["https://192.168.39.68:2380"],"listen-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.68:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-26T23:37:00.149120Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-26T23:37:00.144858Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2025-09-26T23:37:00.149329Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2025-09-26T23:37:01.690011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-26T23:37:01.690068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-26T23:37:01.690085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgPreVoteResp from 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2025-09-26T23:37:01.690095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became candidate at term 3"}
	{"level":"info","ts":"2025-09-26T23:37:01.690116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2025-09-26T23:37:01.690126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became leader at term 3"}
	{"level":"info","ts":"2025-09-26T23:37:01.690132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2025-09-26T23:37:01.692483Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"821abe7be15f44a3","local-member-attributes":"{Name:test-preload-627811 ClientURLs:[https://192.168.39.68:2379]}","request-path":"/0/members/821abe7be15f44a3/attributes","cluster-id":"68cd46418ae274f9","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-26T23:37:01.692614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-26T23:37:01.692722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-26T23:37:01.693593Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-26T23:37:01.693706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-26T23:37:01.693723Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-26T23:37:01.694192Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-26T23:37:01.694301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-26T23:37:01.694814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.68:2379"}
	
	
	==> kernel <==
	 23:37:19 up 0 min,  0 users,  load average: 0.60, 0.18, 0.06
	Linux test-preload-627811 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [19910a132ee53bb2cbd95705ffc1c21843d2880fb17b61fe93c7406b20119a3f] <==
	I0926 23:37:02.947973       1 policy_source.go:240] refreshing policies
	I0926 23:37:02.958016       1 aggregator.go:171] initial CRD sync complete...
	I0926 23:37:02.958105       1 autoregister_controller.go:144] Starting autoregister controller
	I0926 23:37:02.958125       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0926 23:37:02.980811       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 23:37:03.007852       1 shared_informer.go:320] Caches are synced for configmaps
	I0926 23:37:03.008225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0926 23:37:03.009453       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0926 23:37:03.009557       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0926 23:37:03.009818       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0926 23:37:03.010463       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0926 23:37:03.010563       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0926 23:37:03.012379       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0926 23:37:03.028174       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0926 23:37:03.051996       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0926 23:37:03.059731       1 cache.go:39] Caches are synced for autoregister controller
	I0926 23:37:03.282993       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0926 23:37:03.813007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0926 23:37:04.506965       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0926 23:37:04.555709       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0926 23:37:04.584545       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 23:37:04.594302       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 23:37:06.124575       1 controller.go:615] quota admission added evaluator for: endpoints
	I0926 23:37:06.483644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0926 23:37:06.532633       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3b75b1121511b68266020bd75198ea279c00c090803258a82ad00111c0347a53] <==
	I0926 23:37:06.115134       1 shared_informer.go:320] Caches are synced for PVC protection
	I0926 23:37:06.120252       1 shared_informer.go:320] Caches are synced for TTL
	I0926 23:37:06.120267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0926 23:37:06.120423       1 shared_informer.go:320] Caches are synced for HPA
	I0926 23:37:06.120282       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0926 23:37:06.120514       1 shared_informer.go:320] Caches are synced for stateful set
	I0926 23:37:06.121362       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0926 23:37:06.122486       1 shared_informer.go:320] Caches are synced for disruption
	I0926 23:37:06.122501       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0926 23:37:06.123493       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0926 23:37:06.124939       1 shared_informer.go:320] Caches are synced for PV protection
	I0926 23:37:06.128049       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0926 23:37:06.128166       1 shared_informer.go:320] Caches are synced for resource quota
	I0926 23:37:06.130230       1 shared_informer.go:320] Caches are synced for daemon sets
	I0926 23:37:06.138205       1 shared_informer.go:320] Caches are synced for resource quota
	I0926 23:37:06.148485       1 shared_informer.go:320] Caches are synced for namespace
	I0926 23:37:06.151805       1 shared_informer.go:320] Caches are synced for garbage collector
	I0926 23:37:06.164361       1 shared_informer.go:320] Caches are synced for garbage collector
	I0926 23:37:06.164400       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 23:37:06.164408       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 23:37:06.496786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="389.162909ms"
	I0926 23:37:06.496954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.993µs"
	I0926 23:37:07.380071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.076µs"
	I0926 23:37:10.288468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.813589ms"
	I0926 23:37:10.289285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="150.625µs"
	
	
	==> kube-proxy [257e0a4ef4017820b1d342b5f1f0314492d30e60817fd97c9fb63e0b62e26568] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0926 23:37:03.851008       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0926 23:37:03.867363       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.68"]
	E0926 23:37:03.867432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:37:03.913819       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0926 23:37:03.913972       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 23:37:03.914023       1 server_linux.go:170] "Using iptables Proxier"
	I0926 23:37:03.920571       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:37:03.922018       1 server.go:497] "Version info" version="v1.32.0"
	I0926 23:37:03.922104       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:37:03.926272       1 config.go:199] "Starting service config controller"
	I0926 23:37:03.926813       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0926 23:37:03.926871       1 config.go:105] "Starting endpoint slice config controller"
	I0926 23:37:03.926888       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0926 23:37:03.928700       1 config.go:329] "Starting node config controller"
	I0926 23:37:03.928809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0926 23:37:04.027517       1 shared_informer.go:320] Caches are synced for service config
	I0926 23:37:04.027419       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0926 23:37:04.029159       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [54fa82ca1b9bf965f90e8bd44b1fefb401b341b6655f7459560410cad3b23cd5] <==
	I0926 23:37:00.529501       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:37:02.929284       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 23:37:02.929999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 23:37:02.930120       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:37:02.930148       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:37:03.007686       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0926 23:37:03.015464       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:37:03.027120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:37:03.027279       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0926 23:37:03.029395       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0926 23:37:03.029610       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:37:03.127460       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.055497    1154 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-627811"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.055533    1154 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.057968    1154 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.060125    1154 setters.go:602] "Node became not ready" node="test-preload-627811" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-26T23:37:03Z","lastTransitionTime":"2025-09-26T23:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.061196    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-627811\" already exists" pod="kube-system/kube-controller-manager-test-preload-627811"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.061279    1154 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-627811"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.076017    1154 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-627811\" already exists" pod="kube-system/kube-scheduler-test-preload-627811"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.160411    1154 apiserver.go:52] "Watching apiserver"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.166959    1154 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-l42bz" podUID="efa565eb-1043-4e91-b033-f3728fdc1e62"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.187774    1154 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.271342    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/661427d2-9638-47f9-bba5-2369178c0a98-tmp\") pod \"storage-provisioner\" (UID: \"661427d2-9638-47f9-bba5-2369178c0a98\") " pod="kube-system/storage-provisioner"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.271405    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/471b4ccc-716d-4b39-aa1e-48ae441c7509-xtables-lock\") pod \"kube-proxy-gmzk7\" (UID: \"471b4ccc-716d-4b39-aa1e-48ae441c7509\") " pod="kube-system/kube-proxy-gmzk7"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: I0926 23:37:03.271443    1154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/471b4ccc-716d-4b39-aa1e-48ae441c7509-lib-modules\") pod \"kube-proxy-gmzk7\" (UID: \"471b4ccc-716d-4b39-aa1e-48ae441c7509\") " pod="kube-system/kube-proxy-gmzk7"
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.272472    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.272693    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume podName:efa565eb-1043-4e91-b033-f3728fdc1e62 nodeName:}" failed. No retries permitted until 2025-09-26 23:37:03.772671196 +0000 UTC m=+5.712918808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume") pod "coredns-668d6bf9bc-l42bz" (UID: "efa565eb-1043-4e91-b033-f3728fdc1e62") : object "kube-system"/"coredns" not registered
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.775215    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 26 23:37:03 test-preload-627811 kubelet[1154]: E0926 23:37:03.775299    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume podName:efa565eb-1043-4e91-b033-f3728fdc1e62 nodeName:}" failed. No retries permitted until 2025-09-26 23:37:04.775285624 +0000 UTC m=+6.715533248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume") pod "coredns-668d6bf9bc-l42bz" (UID: "efa565eb-1043-4e91-b033-f3728fdc1e62") : object "kube-system"/"coredns" not registered
	Sep 26 23:37:04 test-preload-627811 kubelet[1154]: I0926 23:37:04.752830    1154 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Sep 26 23:37:04 test-preload-627811 kubelet[1154]: E0926 23:37:04.785440    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 26 23:37:04 test-preload-627811 kubelet[1154]: E0926 23:37:04.785786    1154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume podName:efa565eb-1043-4e91-b033-f3728fdc1e62 nodeName:}" failed. No retries permitted until 2025-09-26 23:37:06.785768666 +0000 UTC m=+8.726016283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efa565eb-1043-4e91-b033-f3728fdc1e62-config-volume") pod "coredns-668d6bf9bc-l42bz" (UID: "efa565eb-1043-4e91-b033-f3728fdc1e62") : object "kube-system"/"coredns" not registered
	Sep 26 23:37:08 test-preload-627811 kubelet[1154]: E0926 23:37:08.275889    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929828275434763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 26 23:37:08 test-preload-627811 kubelet[1154]: E0926 23:37:08.276242    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929828275434763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 26 23:37:10 test-preload-627811 kubelet[1154]: I0926 23:37:10.257424    1154 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 26 23:37:18 test-preload-627811 kubelet[1154]: E0926 23:37:18.281706    1154 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929838280812524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 26 23:37:18 test-preload-627811 kubelet[1154]: E0926 23:37:18.281999    1154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758929838280812524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f857e03dc39af567cf9c7de1ca46cd37821adb97806a43740ecccca8cd7f99e2] <==
	I0926 23:37:03.712503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-627811 -n test-preload-627811
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-627811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-627811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-627811
--- FAIL: TestPreload (128.39s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.35s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-298014 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-298014 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.465930185s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-298014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-298014" primary control-plane node in "pause-298014" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-298014" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:42:31.273052   48726 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:42:31.273151   48726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:31.273158   48726 out.go:374] Setting ErrFile to fd 2...
	I0926 23:42:31.273164   48726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:31.273412   48726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:42:31.273943   48726 out.go:368] Setting JSON to false
	I0926 23:42:31.275009   48726 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5096,"bootTime":1758925055,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:42:31.275093   48726 start.go:140] virtualization: kvm guest
	I0926 23:42:31.277107   48726 out.go:179] * [pause-298014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:42:31.278362   48726 notify.go:220] Checking for updates...
	I0926 23:42:31.278392   48726 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:42:31.279764   48726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:42:31.282090   48726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:42:31.283337   48726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:42:31.284678   48726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:42:31.285802   48726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:42:31.287615   48726 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:31.288223   48726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:42:31.288330   48726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:42:31.308422   48726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0926 23:42:31.309158   48726 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:42:31.309781   48726 main.go:141] libmachine: Using API Version  1
	I0926 23:42:31.309807   48726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:42:31.310297   48726 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:42:31.310476   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:31.310807   48726 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:42:31.311118   48726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:42:31.311172   48726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:42:31.331713   48726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0926 23:42:31.332344   48726 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:42:31.333047   48726 main.go:141] libmachine: Using API Version  1
	I0926 23:42:31.333105   48726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:42:31.333617   48726 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:42:31.333871   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:31.379572   48726 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 23:42:31.380789   48726 start.go:304] selected driver: kvm2
	I0926 23:42:31.380806   48726 start.go:924] validating driver "kvm2" against &{Name:pause-298014 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-298014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:42:31.381017   48726 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:42:31.381366   48726 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:31.381506   48726 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:31.397769   48726 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:31.397809   48726 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:31.413245   48726 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:31.414158   48726 cni.go:84] Creating CNI manager for ""
	I0926 23:42:31.414218   48726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:31.414286   48726 start.go:348] cluster config:
	{Name:pause-298014 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-298014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:42:31.414449   48726 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:31.416269   48726 out.go:179] * Starting "pause-298014" primary control-plane node in "pause-298014" cluster
	I0926 23:42:31.417444   48726 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:42:31.417483   48726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:42:31.417492   48726 cache.go:58] Caching tarball of preloaded images
	I0926 23:42:31.417619   48726 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:42:31.417634   48726 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:42:31.417809   48726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/config.json ...
	I0926 23:42:31.418143   48726 start.go:360] acquireMachinesLock for pause-298014: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:42:37.371181   48726 start.go:364] duration metric: took 5.95299961s to acquireMachinesLock for "pause-298014"
	I0926 23:42:37.371237   48726 start.go:96] Skipping create...Using existing machine configuration
	I0926 23:42:37.371245   48726 fix.go:54] fixHost starting: 
	I0926 23:42:37.371729   48726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:42:37.371781   48726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:42:37.389017   48726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0926 23:42:37.389500   48726 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:42:37.390017   48726 main.go:141] libmachine: Using API Version  1
	I0926 23:42:37.390047   48726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:42:37.390376   48726 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:42:37.390577   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:37.390747   48726 main.go:141] libmachine: (pause-298014) Calling .GetState
	I0926 23:42:37.392854   48726 fix.go:112] recreateIfNeeded on pause-298014: state=Running err=<nil>
	W0926 23:42:37.392876   48726 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 23:42:37.394622   48726 out.go:252] * Updating the running kvm2 "pause-298014" VM ...
	I0926 23:42:37.394653   48726 machine.go:93] provisionDockerMachine start ...
	I0926 23:42:37.394672   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:37.394942   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:37.398236   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.398689   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:37.398719   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.398983   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:37.399168   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.399327   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.399456   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:37.399630   48726 main.go:141] libmachine: Using SSH client type: native
	I0926 23:42:37.400015   48726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.242 22 <nil> <nil>}
	I0926 23:42:37.400036   48726 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:42:37.521205   48726 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298014
	
	I0926 23:42:37.521238   48726 main.go:141] libmachine: (pause-298014) Calling .GetMachineName
	I0926 23:42:37.521518   48726 buildroot.go:166] provisioning hostname "pause-298014"
	I0926 23:42:37.521554   48726 main.go:141] libmachine: (pause-298014) Calling .GetMachineName
	I0926 23:42:37.521738   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:37.525302   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.525759   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:37.525814   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.526037   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:37.526251   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.526425   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.526651   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:37.526822   48726 main.go:141] libmachine: Using SSH client type: native
	I0926 23:42:37.527169   48726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.242 22 <nil> <nil>}
	I0926 23:42:37.527189   48726 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-298014 && echo "pause-298014" | sudo tee /etc/hostname
	I0926 23:42:37.671638   48726 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-298014
	
	I0926 23:42:37.671671   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:37.675093   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.675535   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:37.675564   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.675787   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:37.676028   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.676209   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:37.676360   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:37.676516   48726 main.go:141] libmachine: Using SSH client type: native
	I0926 23:42:37.676706   48726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.242 22 <nil> <nil>}
	I0926 23:42:37.676722   48726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-298014' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-298014/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-298014' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:42:37.803536   48726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:42:37.803575   48726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:42:37.803622   48726 buildroot.go:174] setting up certificates
	I0926 23:42:37.803643   48726 provision.go:84] configureAuth start
	I0926 23:42:37.803657   48726 main.go:141] libmachine: (pause-298014) Calling .GetMachineName
	I0926 23:42:37.804111   48726 main.go:141] libmachine: (pause-298014) Calling .GetIP
	I0926 23:42:37.808048   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.808512   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:37.808539   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.808779   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:37.811734   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.812127   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:37.812157   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:37.812373   48726 provision.go:143] copyHostCerts
	I0926 23:42:37.812435   48726 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:42:37.812455   48726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:42:37.812546   48726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:42:37.812699   48726 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:42:37.812713   48726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:42:37.812747   48726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:42:37.812877   48726 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:42:37.812887   48726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:42:37.812918   48726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:42:37.813004   48726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.pause-298014 san=[127.0.0.1 192.168.83.242 localhost minikube pause-298014]
	I0926 23:42:38.039432   48726 provision.go:177] copyRemoteCerts
	I0926 23:42:38.039514   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:42:38.039542   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:38.043504   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:38.044044   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:38.044077   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:38.044266   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:38.044510   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:38.044681   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:38.044875   48726 sshutil.go:53] new ssh client: &{IP:192.168.83.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/pause-298014/id_rsa Username:docker}
	I0926 23:42:38.150308   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:42:38.190410   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0926 23:42:38.231690   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:42:38.270440   48726 provision.go:87] duration metric: took 466.782061ms to configureAuth
	I0926 23:42:38.270470   48726 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:42:38.270732   48726 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:38.270816   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:38.274150   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:38.274581   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:38.274623   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:38.274901   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:38.275075   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:38.275227   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:38.275363   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:38.275554   48726 main.go:141] libmachine: Using SSH client type: native
	I0926 23:42:38.275866   48726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.242 22 <nil> <nil>}
	I0926 23:42:38.275892   48726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:42:44.005002   48726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:42:44.005030   48726 machine.go:96] duration metric: took 6.610368338s to provisionDockerMachine
	I0926 23:42:44.005045   48726 start.go:293] postStartSetup for "pause-298014" (driver="kvm2")
	I0926 23:42:44.005058   48726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:42:44.005111   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:44.005507   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:42:44.005542   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:44.009650   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.010164   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:44.010213   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.010456   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:44.010688   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:44.010883   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:44.011073   48726 sshutil.go:53] new ssh client: &{IP:192.168.83.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/pause-298014/id_rsa Username:docker}
	I0926 23:42:44.104617   48726 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:42:44.110369   48726 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:42:44.110405   48726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:42:44.110475   48726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:42:44.110581   48726 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:42:44.110699   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:42:44.124090   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:42:44.165381   48726 start.go:296] duration metric: took 160.325355ms for postStartSetup
	I0926 23:42:44.165415   48726 fix.go:56] duration metric: took 6.794171185s for fixHost
	I0926 23:42:44.165431   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:44.168495   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.168927   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:44.168958   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.169180   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:44.169381   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:44.169554   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:44.169694   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:44.169906   48726 main.go:141] libmachine: Using SSH client type: native
	I0926 23:42:44.170126   48726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.242 22 <nil> <nil>}
	I0926 23:42:44.170142   48726 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:42:44.292519   48726 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758930164.289586099
	
	I0926 23:42:44.292561   48726 fix.go:216] guest clock: 1758930164.289586099
	I0926 23:42:44.292570   48726 fix.go:229] Guest: 2025-09-26 23:42:44.289586099 +0000 UTC Remote: 2025-09-26 23:42:44.16541847 +0000 UTC m=+12.940009164 (delta=124.167629ms)
	I0926 23:42:44.292593   48726 fix.go:200] guest clock delta is within tolerance: 124.167629ms
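The clock check above works by reading date +%s.%N on the guest over SSH and comparing it with the host's wall clock; here the delta is roughly 124ms, inside minikube's tolerance. A minimal re-creation of that comparison, assuming SSH access with the key path shown earlier in this log (illustrative only, not part of the test):

    # Hypothetical re-run of the guest/host clock comparison
    key=/home/jenkins/minikube-integration/21642-6020/.minikube/machines/pause-298014/id_rsa
    guest=$(ssh -i "$key" docker@192.168.83.242 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest-host delta: $(echo "$guest - $host" | bc) s"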
	I0926 23:42:44.292600   48726 start.go:83] releasing machines lock for "pause-298014", held for 6.921383634s
	I0926 23:42:44.292629   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:44.292920   48726 main.go:141] libmachine: (pause-298014) Calling .GetIP
	I0926 23:42:44.297092   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.297591   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:44.297618   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.297844   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:44.298400   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:44.298677   48726 main.go:141] libmachine: (pause-298014) Calling .DriverName
	I0926 23:42:44.298784   48726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:42:44.298947   48726 ssh_runner.go:195] Run: cat /version.json
	I0926 23:42:44.298968   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:44.299027   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHHostname
	I0926 23:42:44.303031   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.303129   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.303710   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:44.303748   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.303784   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:44.303797   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:44.303993   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:44.304201   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:44.304233   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHPort
	I0926 23:42:44.304359   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHKeyPath
	I0926 23:42:44.304375   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:44.304510   48726 main.go:141] libmachine: (pause-298014) Calling .GetSSHUsername
	I0926 23:42:44.304566   48726 sshutil.go:53] new ssh client: &{IP:192.168.83.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/pause-298014/id_rsa Username:docker}
	I0926 23:42:44.304643   48726 sshutil.go:53] new ssh client: &{IP:192.168.83.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/pause-298014/id_rsa Username:docker}
	I0926 23:42:44.402798   48726 ssh_runner.go:195] Run: systemctl --version
	I0926 23:42:44.432309   48726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:42:44.611307   48726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:42:44.626397   48726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:42:44.626470   48726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:42:44.643516   48726 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0926 23:42:44.643543   48726 start.go:495] detecting cgroup driver to use...
	I0926 23:42:44.643608   48726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:42:44.674629   48726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:42:44.704061   48726 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:42:44.704128   48726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:42:44.738750   48726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:42:44.762260   48726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:42:45.016690   48726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:42:45.289282   48726 docker.go:234] disabling docker service ...
	I0926 23:42:45.289369   48726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:42:45.334431   48726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:42:45.360920   48726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:42:45.607290   48726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:42:45.839162   48726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:42:45.859987   48726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:42:45.893109   48726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:42:45.893187   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:45.911408   48726 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 23:42:45.911462   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:45.931463   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:45.950173   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:45.971595   48726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:42:45.991777   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:46.012193   48726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:46.039880   48726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:42:46.059750   48726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:42:46.076805   48726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:42:46.095531   48726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:42:46.305284   48726 ssh_runner.go:195] Run: sudo systemctl restart crio
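The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. An illustrative check of the result on the node, assuming the same paths:

    # Show the settings minikube just edited, then confirm CRI-O came back up
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    systemctl is-active crio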
	I0926 23:42:46.654056   48726 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:42:46.654144   48726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:42:46.662985   48726 start.go:563] Will wait 60s for crictl version
	I0926 23:42:46.663066   48726 ssh_runner.go:195] Run: which crictl
	I0926 23:42:46.668547   48726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:42:46.735692   48726 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 23:42:46.735773   48726 ssh_runner.go:195] Run: crio --version
	I0926 23:42:46.782220   48726 ssh_runner.go:195] Run: crio --version
	I0926 23:42:46.828802   48726 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 23:42:46.830090   48726 main.go:141] libmachine: (pause-298014) Calling .GetIP
	I0926 23:42:46.833318   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:46.833740   48726 main.go:141] libmachine: (pause-298014) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:45:c7", ip: ""} in network mk-pause-298014: {Iface:virbr2 ExpiryTime:2025-09-27 00:41:21 +0000 UTC Type:0 Mac:52:54:00:50:45:c7 Iaid: IPaddr:192.168.83.242 Prefix:24 Hostname:pause-298014 Clientid:01:52:54:00:50:45:c7}
	I0926 23:42:46.833775   48726 main.go:141] libmachine: (pause-298014) DBG | domain pause-298014 has defined IP address 192.168.83.242 and MAC address 52:54:00:50:45:c7 in network mk-pause-298014
	I0926 23:42:46.834096   48726 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0926 23:42:46.841315   48726 kubeadm.go:883] updating cluster {Name:pause-298014 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-29801
4 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fal
se olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:42:46.841448   48726 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:42:46.841486   48726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:42:46.894277   48726 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:42:46.894445   48726 crio.go:433] Images already preloaded, skipping extraction
	I0926 23:42:46.894507   48726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:42:46.948343   48726 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:42:46.948365   48726 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:42:46.948374   48726 kubeadm.go:934] updating node { 192.168.83.242 8443 v1.34.0 crio true true} ...
	I0926 23:42:46.948474   48726 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-298014 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-298014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
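The [Unit]/[Service] snippet above is the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 312-byte scp). To inspect the effective unit on the node, one could run (sketch only):

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl cat kubelet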
	I0926 23:42:46.948559   48726 ssh_runner.go:195] Run: crio config
	I0926 23:42:47.011888   48726 cni.go:84] Creating CNI manager for ""
	I0926 23:42:47.011922   48726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:47.011945   48726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:42:47.011976   48726 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.242 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-298014 NodeName:pause-298014 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:42:47.012179   48726 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-298014"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.242"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.242"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:42:47.012248   48726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:42:47.033512   48726 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:42:47.033583   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:42:47.049899   48726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 23:42:47.082621   48726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:42:47.111416   48726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
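The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new on the node. Purely as an illustration (the test does not run this), the file could be sanity-checked with the bundled kubeadm binary's dry-run mode:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run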
	I0926 23:42:47.143451   48726 ssh_runner.go:195] Run: grep 192.168.83.242	control-plane.minikube.internal$ /etc/hosts
	I0926 23:42:47.148738   48726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:42:47.378812   48726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:42:47.404222   48726 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014 for IP: 192.168.83.242
	I0926 23:42:47.404245   48726 certs.go:195] generating shared ca certs ...
	I0926 23:42:47.404264   48726 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:42:47.404419   48726 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:42:47.404482   48726 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:42:47.404513   48726 certs.go:257] generating profile certs ...
	I0926 23:42:47.404635   48726 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key
	I0926 23:42:47.404706   48726 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/apiserver.key.cea94ce8
	I0926 23:42:47.404764   48726 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/proxy-client.key
	I0926 23:42:47.404941   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:42:47.404993   48726 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:42:47.405008   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:42:47.405044   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:42:47.405083   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:42:47.405117   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:42:47.405180   48726 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:42:47.406118   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:42:47.445609   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:42:47.483915   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:42:47.517819   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:42:47.560676   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:42:47.600609   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:42:47.727388   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:42:47.798940   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:42:47.886974   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:42:47.957538   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:42:48.028569   48726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:42:48.142148   48726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:42:48.226431   48726 ssh_runner.go:195] Run: openssl version
	I0926 23:42:48.241563   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:42:48.271136   48726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:42:48.282185   48726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:42:48.282255   48726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:42:48.297432   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:42:48.334228   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:42:48.363617   48726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:42:48.377082   48726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:42:48.377151   48726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:42:48.391837   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
	I0926 23:42:48.419713   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:42:48.451053   48726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:42:48.465365   48726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:42:48.465435   48726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:42:48.487349   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
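The test/ln commands above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). The hash in the link name comes directly from openssl, for example (illustrative):

    # Prints b5213941, matching the symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0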
	I0926 23:42:48.521248   48726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:42:48.539596   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 23:42:48.557352   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 23:42:48.571739   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 23:42:48.585623   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 23:42:48.603491   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 23:42:48.624456   48726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
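Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours); an exit status of 0 means it remains valid for at least that long. For example (sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"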
	I0926 23:42:48.647286   48726 kubeadm.go:400] StartCluster: {Name:pause-298014 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-298014 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:42:48.647395   48726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:42:48.647440   48726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:42:48.874119   48726 cri.go:89] found id: "1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a60ecb5"
	I0926 23:42:48.874147   48726 cri.go:89] found id: "51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437"
	I0926 23:42:48.874153   48726 cri.go:89] found id: "645856b5f963235624cf1b074088fc311ed16a9d7a3aa3d34cb9f5291ea4d996"
	I0926 23:42:48.874158   48726 cri.go:89] found id: "168bab96e50b2f889b38634530403e8a10ae45bb9fd35cff73d3214501ddcb1c"
	I0926 23:42:48.874163   48726 cri.go:89] found id: "2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16"
	I0926 23:42:48.874167   48726 cri.go:89] found id: "4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c"
	I0926 23:42:48.874170   48726 cri.go:89] found id: "f967ebe7302f5cf4de8213a6b3a0c7a4436980ecd58e590ba1e4bd41c75d9839"
	I0926 23:42:48.874173   48726 cri.go:89] found id: ""
	I0926 23:42:48.874220   48726 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-298014 -n pause-298014
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-298014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-298014 logs -n 25: (1.65463282s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-421834 sudo systemctl status kubelet --all --full --no-pager                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat kubelet --no-pager                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo journalctl -xeu kubelet --all --full --no-pager                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/kubernetes/kubelet.conf                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /var/lib/kubelet/config.yaml                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status docker --all --full --no-pager                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat docker --no-pager                                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/docker/daemon.json                                                                                │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo docker system info                                                                                         │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status cri-docker --all --full --no-pager                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat cri-docker --no-pager                                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                   │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /usr/lib/systemd/system/cri-docker.service                                                             │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cri-dockerd --version                                                                                      │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status containerd --all --full --no-pager                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat containerd --no-pager                                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /lib/systemd/system/containerd.service                                                                 │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/containerd/config.toml                                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo containerd config dump                                                                                     │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status crio --all --full --no-pager                                                              │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat crio --no-pager                                                                              │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo crio config                                                                                                │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ delete  │ -p cilium-421834                                                                                                                 │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │ 26 Sep 25 23:42 UTC │
	│ start   │ -p cert-expiration-648174 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-expiration-648174 │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:42:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:42:51.576944   51192 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:42:51.577236   51192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:51.577242   51192 out.go:374] Setting ErrFile to fd 2...
	I0926 23:42:51.577249   51192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:51.578911   51192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:42:51.579395   51192 out.go:368] Setting JSON to false
	I0926 23:42:51.580288   51192 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5117,"bootTime":1758925055,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:42:51.580368   51192 start.go:140] virtualization: kvm guest
	I0926 23:42:51.586104   51192 out.go:179] * [cert-expiration-648174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:42:51.587814   51192 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:42:51.587859   51192 notify.go:220] Checking for updates...
	I0926 23:42:51.590199   51192 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:42:51.591330   51192 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:42:51.592336   51192 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:42:51.593481   51192 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:42:51.594643   51192 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:42:51.596402   51192 config.go:182] Loaded profile config "force-systemd-env-429303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:51.596611   51192 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:51.596736   51192 config.go:182] Loaded profile config "stopped-upgrade-217447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0926 23:42:51.596881   51192 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:42:51.629717   51192 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 23:42:51.630817   51192 start.go:304] selected driver: kvm2
	I0926 23:42:51.630839   51192 start.go:924] validating driver "kvm2" against <nil>
	I0926 23:42:51.630861   51192 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:42:51.631912   51192 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:51.631997   51192 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:51.647291   51192 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:51.647315   51192 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:51.662100   51192 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:51.662133   51192 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:42:51.662379   51192 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 23:42:51.662400   51192 cni.go:84] Creating CNI manager for ""
	I0926 23:42:51.662439   51192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:51.662445   51192 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:42:51.662494   51192 start.go:348] cluster config:
	{Name:cert-expiration-648174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-648174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:42:51.662574   51192 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:51.664937   51192 out.go:179] * Starting "cert-expiration-648174" primary control-plane node in "cert-expiration-648174" cluster
	I0926 23:42:47.444965   50469 preload.go:131] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I0926 23:42:47.445016   50469 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I0926 23:42:47.445024   50469 cache.go:58] Caching tarball of preloaded images
	I0926 23:42:47.445101   50469 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:42:47.445111   50469 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I0926 23:42:47.445205   50469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/stopped-upgrade-217447/config.json ...
	I0926 23:42:47.445461   50469 start.go:360] acquireMachinesLock for stopped-upgrade-217447: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:42:48.266215   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:48.266978   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:48.267010   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:48.267347   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:48.267403   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:48.267323   49836 retry.go:31] will retry after 493.294397ms: waiting for domain to come up
	I0926 23:42:48.762048   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:48.762702   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:48.762725   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:48.763140   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:48.763172   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:48.763121   49836 retry.go:31] will retry after 842.369329ms: waiting for domain to come up
	I0926 23:42:49.608053   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:49.608869   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:49.608897   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:49.609190   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:49.610355   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:49.610252   49836 retry.go:31] will retry after 779.366798ms: waiting for domain to come up
	I0926 23:42:50.391116   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:50.391799   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:50.391838   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:50.392166   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:50.392189   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:50.392152   49836 retry.go:31] will retry after 1.124715923s: waiting for domain to come up
	I0926 23:42:51.519348   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:51.520139   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:51.520180   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:51.520450   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:51.520478   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:51.520435   49836 retry.go:31] will retry after 1.206643322s: waiting for domain to come up
	I0926 23:42:52.729018   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:52.729562   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:52.729592   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:52.729962   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:52.729991   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:52.729936   49836 retry.go:31] will retry after 2.121355284s: waiting for domain to come up
	I0926 23:42:53.003817   48726 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c 44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32 1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a60ecb5 51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437 645856b5f963235624cf1b074088fc311ed16a9d7a3aa3d34cb9f5291ea4d996 168bab96e50b2f889b38634530403e8a10ae45bb9fd35cff73d3214501ddcb1c 2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16 4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c f967ebe7302f5cf4de8213a6b3a0c7a4436980ecd58e590ba1e4bd41c75d9839: (3.725512685s)
	I0926 23:42:53.003913   48726 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 23:42:53.042699   48726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:42:53.056737   48726 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 26 23:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Sep 26 23:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Sep 26 23:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Sep 26 23:41 /etc/kubernetes/scheduler.conf
	
	I0926 23:42:53.056820   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:42:53.069626   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:42:53.082275   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.082340   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:42:53.095446   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:42:53.107190   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.107258   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:42:53.119696   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:42:53.136739   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.136800   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
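
Before re-running kubeadm, the restart path checks whether each existing kubeconfig under /etc/kubernetes still references https://control-plane.minikube.internal:8443; when the grep exits 1 the file is deleted so a later kubeadm phase can regenerate it. A rough local sketch of that decision, using os/exec in place of the SSH runner; the helper name and the -q flag are illustrative, not minikube's exact invocation:

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleKubeconfig removes conf if it no longer references the expected
// control-plane endpoint, so "kubeadm init phase kubeconfig" can recreate it.
// grep -q exits non-zero when the pattern is absent.
func pruneStaleKubeconfig(conf, endpoint string) error {
	if err := exec.Command("sudo", "grep", "-q", endpoint, conf).Run(); err != nil {
		fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
		return exec.Command("sudo", "rm", "-f", conf).Run()
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(conf, endpoint); err != nil {
			fmt.Println("prune failed:", err)
		}
	}
}
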
	I0926 23:42:53.153509   48726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:42:53.167200   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:53.228781   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:54.839652   48726 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.610824832s)
	I0926 23:42:54.839743   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:55.130416   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:55.210245   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
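
The control plane is then rebuilt by running the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init. A compact sketch of driving those phases from Go; the PATH handling and error policy are simplified assumptions compared to the SSH-based runner in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// The real runner prefixes PATH with /var/lib/minikube/binaries/<version>
		// and executes over SSH; here we simply run the kubeadm binary on PATH.
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}
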
	I0926 23:42:55.317467   48726 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:42:55.317564   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:55.818663   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:51.666079   51192 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:42:51.666119   51192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:42:51.666126   51192 cache.go:58] Caching tarball of preloaded images
	I0926 23:42:51.666216   51192 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:42:51.666224   51192 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:42:51.666346   51192 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/cert-expiration-648174/config.json ...
	I0926 23:42:51.666363   51192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/cert-expiration-648174/config.json: {Name:mk913b7adc40d2f2db2b9a5b2831fb8b39e6b32c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:42:51.666562   51192 start.go:360] acquireMachinesLock for cert-expiration-648174: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:42:54.853780   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:54.854533   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:54.854562   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:54.854966   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:54.855061   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:54.854979   49836 retry.go:31] will retry after 1.892985147s: waiting for domain to come up
	I0926 23:42:56.750480   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:56.751277   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:56.751310   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:56.751661   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:56.751701   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:56.751643   49836 retry.go:31] will retry after 3.13275826s: waiting for domain to come up
	I0926 23:42:56.317962   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:56.352738   48726 api_server.go:72] duration metric: took 1.035265714s to wait for apiserver process to appear ...
	I0926 23:42:56.352772   48726 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:42:56.352812   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.463027   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:42:58.463065   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:42:58.463079   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.587800   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:42:58.587858   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:42:58.853385   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.858743   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:42:58.858772   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 23:42:59.353597   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:59.360756   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:42:59.360788   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 23:42:59.853026   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:59.859472   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 200:
	ok
	I0926 23:42:59.869995   48726 api_server.go:141] control plane version: v1.34.0
	I0926 23:42:59.870029   48726 api_server.go:131] duration metric: took 3.517248976s to wait for apiserver health ...
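
The 403 -> 500 -> 200 progression above is the normal bring-up sequence: anonymous requests are rejected until the RBAC bootstrap roles land, then /healthz reports the rbac/bootstrap-roles and scheduling post-start hooks as failed until they finish. A small sketch of the polling loop; the insecure TLS config is an assumption to keep it short, whereas minikube verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 or the deadline expires. Intermediate 403/500 responses are
// expected while bootstrap hooks complete.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.83.242:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
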
	I0926 23:42:59.870041   48726 cni.go:84] Creating CNI manager for ""
	I0926 23:42:59.870049   48726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:59.871696   48726 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 23:42:59.873432   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:42:59.890535   48726 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
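
With the kvm2 driver and the crio runtime, minikube recommends the built-in bridge CNI and copies a 1-k8s.conflist into /etc/cni/net.d (496 bytes in this run). The actual file is generated from a template; the JSON embedded below is only a representative bridge+portmap conflist of the same shape, not the literal bytes that were copied:

package main

import "fmt"

// bridgeConflist is a representative bridge CNI config of the kind written to
// /etc/cni/net.d/1-k8s.conflist; the field values are assumptions for
// illustration, not the contents scp'd in this test.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	fmt.Println(len(bridgeConflist), "bytes")
}
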
	I0926 23:42:59.928900   48726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:42:59.943092   48726 system_pods.go:59] 6 kube-system pods found
	I0926 23:42:59.943151   48726 system_pods.go:61] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:42:59.943173   48726 system_pods.go:61] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:42:59.943185   48726 system_pods.go:61] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:42:59.943203   48726 system_pods.go:61] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:42:59.943216   48726 system_pods.go:61] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:42:59.943225   48726 system_pods.go:61] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:42:59.943232   48726 system_pods.go:74] duration metric: took 14.308017ms to wait for pod list to return data ...
	I0926 23:42:59.943247   48726 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:42:59.948141   48726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:42:59.948184   48726 node_conditions.go:123] node cpu capacity is 2
	I0926 23:42:59.948200   48726 node_conditions.go:105] duration metric: took 4.94767ms to run NodePressure ...
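
The readiness checks above (listing kube-system pods, then verifying the NodePressure condition and node cpu/ephemeral-storage capacity) go through the ordinary Kubernetes API client. A condensed client-go sketch of the same queries; the kubeconfig path is an assumption taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21642-6020/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Equivalent of "waiting for kube-system pods to appear".
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Equivalent of the NodePressure / capacity checks.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
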
	I0926 23:42:59.948262   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:43:00.225956   48726 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I0926 23:43:00.230331   48726 kubeadm.go:743] kubelet initialised
	I0926 23:43:00.230354   48726 kubeadm.go:744] duration metric: took 4.373004ms waiting for restarted kubelet to initialise ...
	I0926 23:43:00.230369   48726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:43:00.247899   48726 ops.go:34] apiserver oom_adj: -16
	I0926 23:43:00.247921   48726 kubeadm.go:601] duration metric: took 11.184528516s to restartPrimaryControlPlane
	I0926 23:43:00.247930   48726 kubeadm.go:402] duration metric: took 11.600654079s to StartCluster
	I0926 23:43:00.247947   48726 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:00.248023   48726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:43:00.248768   48726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:00.249048   48726 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:43:00.249252   48726 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:43:00.249176   48726 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:43:00.250661   48726 out.go:179] * Verifying Kubernetes components...
	I0926 23:43:00.250661   48726 out.go:179] * Enabled addons: 
	I0926 23:43:00.251760   48726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:00.251804   48726 addons.go:514] duration metric: took 2.647759ms for enable addons: enabled=[]
	I0926 23:43:00.440654   48726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:43:00.466865   48726 node_ready.go:35] waiting up to 6m0s for node "pause-298014" to be "Ready" ...
	I0926 23:43:00.471036   48726 node_ready.go:49] node "pause-298014" is "Ready"
	I0926 23:43:00.471065   48726 node_ready.go:38] duration metric: took 4.161964ms for node "pause-298014" to be "Ready" ...
	I0926 23:43:00.471080   48726 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:43:00.471139   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:43:00.496807   48726 api_server.go:72] duration metric: took 247.718701ms to wait for apiserver process to appear ...
	I0926 23:43:00.496852   48726 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:43:00.496873   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:43:00.501482   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 200:
	ok
	I0926 23:43:00.502498   48726 api_server.go:141] control plane version: v1.34.0
	I0926 23:43:00.502518   48726 api_server.go:131] duration metric: took 5.659732ms to wait for apiserver health ...
	I0926 23:43:00.502527   48726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:43:00.505340   48726 system_pods.go:59] 6 kube-system pods found
	I0926 23:43:00.505362   48726 system_pods.go:61] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:43:00.505370   48726 system_pods.go:61] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:43:00.505376   48726 system_pods.go:61] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:43:00.505382   48726 system_pods.go:61] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:43:00.505386   48726 system_pods.go:61] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running
	I0926 23:43:00.505391   48726 system_pods.go:61] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:43:00.505420   48726 system_pods.go:74] duration metric: took 2.887105ms to wait for pod list to return data ...
	I0926 23:43:00.505427   48726 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:43:00.508349   48726 default_sa.go:45] found service account: "default"
	I0926 23:43:00.508368   48726 default_sa.go:55] duration metric: took 2.934618ms for default service account to be created ...
	I0926 23:43:00.508376   48726 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:43:00.511652   48726 system_pods.go:86] 6 kube-system pods found
	I0926 23:43:00.511669   48726 system_pods.go:89] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:43:00.511676   48726 system_pods.go:89] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:43:00.511683   48726 system_pods.go:89] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:43:00.511692   48726 system_pods.go:89] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:43:00.511700   48726 system_pods.go:89] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running
	I0926 23:43:00.511706   48726 system_pods.go:89] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:43:00.511713   48726 system_pods.go:126] duration metric: took 3.331612ms to wait for k8s-apps to be running ...
	I0926 23:43:00.511722   48726 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:43:00.511762   48726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:43:00.529608   48726 system_svc.go:56] duration metric: took 17.875479ms WaitForService to wait for kubelet
	I0926 23:43:00.529643   48726 kubeadm.go:586] duration metric: took 280.560106ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:43:00.529664   48726 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:43:00.533649   48726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:43:00.533672   48726 node_conditions.go:123] node cpu capacity is 2
	I0926 23:43:00.533684   48726 node_conditions.go:105] duration metric: took 4.014326ms to run NodePressure ...
	I0926 23:43:00.533697   48726 start.go:241] waiting for startup goroutines ...
	I0926 23:43:00.533707   48726 start.go:246] waiting for cluster config update ...
	I0926 23:43:00.533717   48726 start.go:255] writing updated cluster config ...
	I0926 23:43:00.534083   48726 ssh_runner.go:195] Run: rm -f paused
	I0926 23:43:00.539756   48726 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:43:00.540270   48726 kapi.go:59] client config for pause-298014: &rest.Config{Host:"https://192.168.83.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:43:00.543908   48726 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74fdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:00.548696   48726 pod_ready.go:94] pod "coredns-66bc5c9577-74fdn" is "Ready"
	I0926 23:43:00.548716   48726 pod_ready.go:86] duration metric: took 4.786672ms for pod "coredns-66bc5c9577-74fdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:00.550610   48726 pod_ready.go:83] waiting for pod "etcd-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
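
pod_ready.go treats a pod's wait as finished once its Ready condition is True, or once the pod is gone. A hedged client-go sketch of that condition check; the function name and the kubeconfig lookup are illustrative, not minikube's internal helpers:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone reports whether the named pod has condition Ready=True, or
// no longer exists; either outcome ends the wait shown in the log above.
func podReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReadyOrGone(context.Background(), cs, "kube-system", "etcd-pause-298014")
	fmt.Println(ready, err)
}
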
	I0926 23:42:59.886160   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:59.886846   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:59.886883   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:59.887243   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:59.887278   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:59.887172   49836 retry.go:31] will retry after 3.300788257s: waiting for domain to come up
	I0926 23:43:04.910220   50469 start.go:364] duration metric: took 17.464721825s to acquireMachinesLock for "stopped-upgrade-217447"
	I0926 23:43:04.910295   50469 start.go:96] Skipping create...Using existing machine configuration
	I0926 23:43:04.910306   50469 fix.go:54] fixHost starting: 
	I0926 23:43:04.910737   50469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:43:04.910790   50469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:43:04.929859   50469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0926 23:43:04.930352   50469 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:43:04.930916   50469 main.go:141] libmachine: Using API Version  1
	I0926 23:43:04.930945   50469 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:43:04.931363   50469 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:43:04.931585   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .DriverName
	I0926 23:43:04.931756   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .GetState
	I0926 23:43:04.933940   50469 fix.go:112] recreateIfNeeded on stopped-upgrade-217447: state=Stopped err=<nil>
	I0926 23:43:04.933999   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .DriverName
	W0926 23:43:04.934188   50469 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 23:43:01.561195   48726 pod_ready.go:94] pod "etcd-pause-298014" is "Ready"
	I0926 23:43:01.561224   48726 pod_ready.go:86] duration metric: took 1.010597472s for pod "etcd-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:01.565507   48726 pod_ready.go:83] waiting for pod "kube-apiserver-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:43:03.573013   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	W0926 23:43:05.574761   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:03.189365   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.190116   48840 main.go:141] libmachine: (force-systemd-env-429303) found domain IP: 192.168.39.231
	I0926 23:43:03.190147   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has current primary IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.190153   48840 main.go:141] libmachine: (force-systemd-env-429303) reserving static IP address...
	I0926 23:43:03.190578   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-429303", mac: "52:54:00:da:63:d4", ip: "192.168.39.231"} in network mk-force-systemd-env-429303
	I0926 23:43:03.410348   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Getting to WaitForSSH function...
	I0926 23:43:03.410376   48840 main.go:141] libmachine: (force-systemd-env-429303) reserved static IP address 192.168.39.231 for domain force-systemd-env-429303
	I0926 23:43:03.410391   48840 main.go:141] libmachine: (force-systemd-env-429303) waiting for SSH...
	I0926 23:43:03.413496   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.414106   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.414144   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.414283   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Using SSH client type: external
	I0926 23:43:03.414329   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa (-rw-------)
	I0926 23:43:03.414376   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:43:03.414394   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | About to run SSH command:
	I0926 23:43:03.414410   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | exit 0
	I0926 23:43:03.549698   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | SSH cmd err, output: <nil>: 
	I0926 23:43:03.550069   48840 main.go:141] libmachine: (force-systemd-env-429303) domain creation complete
	I0926 23:43:03.550401   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetConfigRaw
	I0926 23:43:03.551049   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:03.551260   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:03.551388   48840 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 23:43:03.551403   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetState
	I0926 23:43:03.553014   48840 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 23:43:03.553032   48840 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 23:43:03.553040   48840 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 23:43:03.553048   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.556420   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.556877   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.556905   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.557097   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.557278   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.557396   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.557515   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.557701   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.557977   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.557989   48840 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 23:43:03.675033   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:43:03.675073   48840 main.go:141] libmachine: Detecting the provisioner...
	I0926 23:43:03.675083   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.678605   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.679083   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.679122   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.679252   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.679453   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.679594   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.679698   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.679849   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.680126   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.680141   48840 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 23:43:03.791667   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 23:43:03.791747   48840 main.go:141] libmachine: found compatible host: buildroot
	I0926 23:43:03.791756   48840 main.go:141] libmachine: Provisioning with buildroot...
	I0926 23:43:03.791764   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:03.792067   48840 buildroot.go:166] provisioning hostname "force-systemd-env-429303"
	I0926 23:43:03.792094   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:03.792312   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.795468   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.795914   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.795952   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.796124   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.796315   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.796493   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.796671   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.796875   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.797077   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.797089   48840 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-429303 && echo "force-systemd-env-429303" | sudo tee /etc/hostname
	I0926 23:43:03.934728   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-429303
	
	I0926 23:43:03.934762   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.938173   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.938568   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.938603   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.938792   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.939047   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.939229   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.939331   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.939477   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.939712   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.939731   48840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-429303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-429303/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-429303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:43:04.063874   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:43:04.063907   48840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:43:04.063978   48840 buildroot.go:174] setting up certificates
	I0926 23:43:04.063993   48840 provision.go:84] configureAuth start
	I0926 23:43:04.064013   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:04.064361   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.067749   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.068342   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.068373   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.068538   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.072714   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.073222   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.073255   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.073417   48840 provision.go:143] copyHostCerts
	I0926 23:43:04.073455   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:43:04.073484   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:43:04.073498   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:43:04.073563   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:43:04.073641   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:43:04.073659   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:43:04.073665   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:43:04.073694   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:43:04.073759   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:43:04.073787   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:43:04.073797   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:43:04.073862   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:43:04.073927   48840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-429303 san=[127.0.0.1 192.168.39.231 force-systemd-env-429303 localhost minikube]
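
configureAuth regenerates a docker-machine style server certificate signed by the local CA, carrying the SANs listed above (127.0.0.1, the VM IP, the machine name, localhost, minikube). A stripped-down crypto/x509 sketch of building such a certificate; it self-signs only to stay short, whereas the real flow signs with ca.pem / ca-key.pem from the .minikube/certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template with the same kind of SANs as the log's "san=[...]" entry.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-429303"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"force-systemd-env-429303", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated server cert, %d bytes DER\n", len(der))
}
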
	I0926 23:43:04.175697   48840 provision.go:177] copyRemoteCerts
	I0926 23:43:04.175753   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:43:04.175775   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.178943   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.179344   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.179385   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.179578   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.179800   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.179971   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.180127   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.266933   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 23:43:04.267025   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:43:04.299365   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 23:43:04.299469   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0926 23:43:04.330135   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 23:43:04.330208   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:43:04.361981   48840 provision.go:87] duration metric: took 297.969684ms to configureAuth
	I0926 23:43:04.362019   48840 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:43:04.362187   48840 config.go:182] Loaded profile config "force-systemd-env-429303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:43:04.362277   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.365427   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.365762   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.365786   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.366023   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.366247   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.366430   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.366592   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.366757   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:04.367000   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:04.367016   48840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:43:04.638676   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:43:04.638713   48840 main.go:141] libmachine: Checking connection to Docker...
	I0926 23:43:04.638725   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetURL
	I0926 23:43:04.640227   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | using libvirt version 8000000
	I0926 23:43:04.642959   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.643395   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.643431   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.643659   48840 main.go:141] libmachine: Docker is up and running!
	I0926 23:43:04.643678   48840 main.go:141] libmachine: Reticulating splines...
	I0926 23:43:04.643686   48840 client.go:171] duration metric: took 20.326455404s to LocalClient.Create
	I0926 23:43:04.643709   48840 start.go:167] duration metric: took 20.326557837s to libmachine.API.Create "force-systemd-env-429303"
	I0926 23:43:04.643719   48840 start.go:293] postStartSetup for "force-systemd-env-429303" (driver="kvm2")
	I0926 23:43:04.643728   48840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:43:04.643744   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.644062   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:43:04.644090   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.646895   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.647324   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.647358   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.647544   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.647768   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.647970   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.648124   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.737457   48840 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:43:04.742895   48840 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:43:04.742926   48840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:43:04.743005   48840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:43:04.743114   48840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:43:04.743126   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> /etc/ssl/certs/99142.pem
	I0926 23:43:04.743230   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:43:04.756887   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:43:04.788999   48840 start.go:296] duration metric: took 145.266169ms for postStartSetup
	I0926 23:43:04.789057   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetConfigRaw
	I0926 23:43:04.789714   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.792736   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.793181   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.793231   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.793659   48840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/config.json ...
	I0926 23:43:04.793946   48840 start.go:128] duration metric: took 20.50096342s to createHost
	I0926 23:43:04.793977   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.796522   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.796952   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.796995   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.797199   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.797397   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.797593   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.797728   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.797880   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:04.798150   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:04.798163   48840 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:43:04.909961   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758930184.871753731
	
	I0926 23:43:04.909987   48840 fix.go:216] guest clock: 1758930184.871753731
	I0926 23:43:04.909994   48840 fix.go:229] Guest: 2025-09-26 23:43:04.871753731 +0000 UTC Remote: 2025-09-26 23:43:04.793961367 +0000 UTC m=+31.892863571 (delta=77.792364ms)
	I0926 23:43:04.910035   48840 fix.go:200] guest clock delta is within tolerance: 77.792364ms
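fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and only intervenes when the delta exceeds a tolerance. A small sketch of that comparison, reusing the exact values from the log (the one-second tolerance here is a placeholder, not minikube's configured threshold):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1758930184.871753731" (date +%s.%N output) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad/truncate the fraction to 9 digits so ".8717" means 871,700,000 ns.
            frac := (parts[1] + "000000000")[:9]
            nsec, err = strconv.ParseInt(frac, 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1758930184.871753731")
        if err != nil {
            panic(err)
        }
        host := time.Date(2025, 9, 26, 23, 43, 4, 793961367, time.UTC) // "Remote" timestamp from the log
        delta := guest.Sub(host)                                       // 77.792364ms, matching the log
        const tolerance = time.Second                                  // placeholder threshold for this sketch
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, math.Abs(float64(delta)) < float64(tolerance))
    }
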
	I0926 23:43:04.910046   48840 start.go:83] releasing machines lock for "force-systemd-env-429303", held for 20.61733579s
	I0926 23:43:04.910077   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.910394   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.914147   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.914605   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.914643   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.914821   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915341   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915553   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915651   48840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:43:04.915699   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.915736   48840 ssh_runner.go:195] Run: cat /version.json
	I0926 23:43:04.915755   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.919592   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.919859   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920052   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.920076   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920312   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.920497   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.920527   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920533   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.920708   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.920787   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.920939   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.921116   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.921126   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.921295   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:05.028442   48840 ssh_runner.go:195] Run: systemctl --version
	I0926 23:43:05.035789   48840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:43:05.196186   48840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:43:05.203966   48840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:43:05.204059   48840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:43:05.226700   48840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
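The `find ... -exec mv {} {}.mk_disabled` step above renames any pre-existing bridge or podman CNI configs out of the way so the CNI minikube configures wins. A rough Go equivalent over /etc/cni/net.d, assuming the same naming convention (sketch only, run as root):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // Mirror the find predicate: *bridge* or *podman*, not already disabled.
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join("/etc/cni/net.d", name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", src)
            }
        }
    }
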
	I0926 23:43:05.226724   48840 start.go:495] detecting cgroup driver to use...
	I0926 23:43:05.226741   48840 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0926 23:43:05.226817   48840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:43:05.249131   48840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:43:05.268336   48840 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:43:05.268393   48840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:43:05.288488   48840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:43:05.308299   48840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:43:05.482350   48840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:43:05.698507   48840 docker.go:234] disabling docker service ...
	I0926 23:43:05.698607   48840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:43:05.718099   48840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:43:05.734927   48840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:43:05.919711   48840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:43:06.075152   48840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
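The cri-docker and docker units are taken out of the picture with a stop/disable/mask sequence so CRI-O is the only runtime left listening. The same idea can be sketched with os/exec (the exact verb applied to each unit differs slightly from the log, which stops sockets, disables them, and masks the services):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
        for _, u := range units {
            for _, verb := range []string{"stop", "disable", "mask"} {
                // Errors (e.g. already-stopped or missing units) are printed rather than fatal,
                // much as the log tolerates them and keeps going.
                out, err := exec.Command("sudo", "systemctl", verb, u).CombinedOutput()
                fmt.Printf("systemctl %s %s: err=%v %s", verb, u, err, out)
            }
        }
    }
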
	I0926 23:43:06.096420   48840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:43:06.122184   48840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:43:06.122254   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.136979   48840 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 23:43:06.137057   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.151790   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.165782   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.179764   48840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:43:06.194516   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.208942   48840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.233662   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
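The sed commands above pin the pause image, switch the cgroup manager to systemd, put conmon into the "pod" cgroup, and open unprivileged port 0 in /etc/crio/crio.conf.d/02-crio.conf. A sketch of the same rewrite done with regexp in Go (paths and values copied from the log; the exact sed semantics are only approximated):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf := string(data)
        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        // Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }
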
	I0926 23:43:06.248701   48840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:43:06.260787   48840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 23:43:06.260860   48840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 23:43:06.293269   48840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
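When the bridge-nf-call-iptables sysctl can't be read, the log falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. The same check-and-fix, sketched with the standard library (needs root; error handling kept minimal):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // Mirrors "sudo modprobe br_netfilter" from the log.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                panic(fmt.Errorf("modprobe br_netfilter: %v: %s", err, out))
            }
        }
        // Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }
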
	I0926 23:43:06.307707   48840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:06.484359   48840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:43:06.614334   48840 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:43:06.614436   48840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:43:06.620669   48840 start.go:563] Will wait 60s for crictl version
	I0926 23:43:06.620729   48840 ssh_runner.go:195] Run: which crictl
	I0926 23:43:06.625742   48840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:43:06.674208   48840 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
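After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A small polling loop of the kind that wait implies (sketch, not the actual retry helper minikube uses):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is ready")
    }
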
	I0926 23:43:06.674304   48840 ssh_runner.go:195] Run: crio --version
	I0926 23:43:06.717417   48840 ssh_runner.go:195] Run: crio --version
	I0926 23:43:06.753543   48840 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 23:43:04.936120   50469 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-217447" ...
	I0926 23:43:04.936162   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .Start
	I0926 23:43:04.936345   50469 main.go:141] libmachine: (stopped-upgrade-217447) starting domain...
	I0926 23:43:04.936369   50469 main.go:141] libmachine: (stopped-upgrade-217447) ensuring networks are active...
	I0926 23:43:04.937244   50469 main.go:141] libmachine: (stopped-upgrade-217447) Ensuring network default is active
	I0926 23:43:04.937717   50469 main.go:141] libmachine: (stopped-upgrade-217447) Ensuring network mk-stopped-upgrade-217447 is active
	I0926 23:43:04.938223   50469 main.go:141] libmachine: (stopped-upgrade-217447) getting domain XML...
	I0926 23:43:04.939419   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | starting domain XML:
	I0926 23:43:04.939441   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | <domain type='kvm'>
	I0926 23:43:04.939463   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <name>stopped-upgrade-217447</name>
	I0926 23:43:04.939478   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <uuid>00d11e93-9dcc-4733-9dcb-a852ca715ee7</uuid>
	I0926 23:43:04.939499   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <memory unit='KiB'>3145728</memory>
	I0926 23:43:04.939508   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0926 23:43:04.939516   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 23:43:04.939549   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <os>
	I0926 23:43:04.939592   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 23:43:04.939619   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <boot dev='cdrom'/>
	I0926 23:43:04.939641   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <boot dev='hd'/>
	I0926 23:43:04.939656   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <bootmenu enable='no'/>
	I0926 23:43:04.939665   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </os>
	I0926 23:43:04.939679   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <features>
	I0926 23:43:04.939691   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <acpi/>
	I0926 23:43:04.939699   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <apic/>
	I0926 23:43:04.939710   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <pae/>
	I0926 23:43:04.939717   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </features>
	I0926 23:43:04.939731   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 23:43:04.939741   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <clock offset='utc'/>
	I0926 23:43:04.939751   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 23:43:04.939769   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_reboot>restart</on_reboot>
	I0926 23:43:04.939790   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_crash>destroy</on_crash>
	I0926 23:43:04.939803   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <devices>
	I0926 23:43:04.939844   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 23:43:04.939857   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <disk type='file' device='cdrom'>
	I0926 23:43:04.939877   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <driver name='qemu' type='raw'/>
	I0926 23:43:04.939896   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/boot2docker.iso'/>
	I0926 23:43:04.939904   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 23:43:04.939911   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <readonly/>
	I0926 23:43:04.939928   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 23:43:04.939937   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </disk>
	I0926 23:43:04.939945   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <disk type='file' device='disk'>
	I0926 23:43:04.939957   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 23:43:04.939973   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/stopped-upgrade-217447.rawdisk'/>
	I0926 23:43:04.939984   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target dev='hda' bus='virtio'/>
	I0926 23:43:04.939999   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 23:43:04.940009   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </disk>
	I0926 23:43:04.940053   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 23:43:04.940084   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 23:43:04.940098   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </controller>
	I0926 23:43:04.940111   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 23:43:04.940124   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 23:43:04.940134   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 23:43:04.940143   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </controller>
	I0926 23:43:04.940156   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <interface type='network'>
	I0926 23:43:04.940167   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <mac address='52:54:00:b4:98:22'/>
	I0926 23:43:04.940178   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source network='mk-stopped-upgrade-217447'/>
	I0926 23:43:04.940189   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <model type='virtio'/>
	I0926 23:43:04.940202   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 23:43:04.940217   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </interface>
	I0926 23:43:04.940227   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <interface type='network'>
	I0926 23:43:04.940242   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <mac address='52:54:00:a3:44:26'/>
	I0926 23:43:04.940254   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source network='default'/>
	I0926 23:43:04.940263   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <model type='virtio'/>
	I0926 23:43:04.940278   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 23:43:04.940291   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </interface>
	I0926 23:43:04.940301   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <serial type='pty'>
	I0926 23:43:04.940320   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target type='isa-serial' port='0'>
	I0926 23:43:04.940330   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |         <model name='isa-serial'/>
	I0926 23:43:04.940339   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       </target>
	I0926 23:43:04.940347   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </serial>
	I0926 23:43:04.940358   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <console type='pty'>
	I0926 23:43:04.940370   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target type='serial' port='0'/>
	I0926 23:43:04.940380   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </console>
	I0926 23:43:04.940390   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <input type='mouse' bus='ps2'/>
	I0926 23:43:04.940397   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 23:43:04.940419   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <audio id='1' type='none'/>
	I0926 23:43:04.940439   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <memballoon model='virtio'>
	I0926 23:43:04.940455   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 23:43:04.940465   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </memballoon>
	I0926 23:43:04.940488   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <rng model='virtio'>
	I0926 23:43:04.940505   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <backend model='random'>/dev/random</backend>
	I0926 23:43:04.940517   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 23:43:04.940526   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </rng>
	I0926 23:43:04.940535   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </devices>
	I0926 23:43:04.940545   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | </domain>
	I0926 23:43:04.940556   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | 
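The domain XML dumped above is what the kvm2 driver reads back to find the VM's MAC addresses, which it then matches against DHCP leases to discover the IP. A sketch that pulls the interface MACs and networks out of such XML with encoding/xml (the struct covers only the elements shown above, nothing more):

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // domain models just the parts of the libvirt XML this sketch needs.
    type domain struct {
        Name       string `xml:"name"`
        Interfaces []struct {
            MAC struct {
                Address string `xml:"address,attr"`
            } `xml:"mac"`
            Source struct {
                Network string `xml:"network,attr"`
            } `xml:"source"`
        } `xml:"devices>interface"`
    }

    func main() {
        raw := `<domain type='kvm'>
      <name>stopped-upgrade-217447</name>
      <devices>
        <interface type='network'>
          <mac address='52:54:00:b4:98:22'/>
          <source network='mk-stopped-upgrade-217447'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:a3:44:26'/>
          <source network='default'/>
        </interface>
      </devices>
    </domain>`
        var d domain
        if err := xml.Unmarshal([]byte(raw), &d); err != nil {
            panic(err)
        }
        for _, iface := range d.Interfaces {
            fmt.Printf("%s: %s on network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
        }
    }
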
	I0926 23:43:06.406079   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for domain to start...
	I0926 23:43:06.407760   50469 main.go:141] libmachine: (stopped-upgrade-217447) domain is now running
	I0926 23:43:06.407786   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for IP...
	I0926 23:43:06.408879   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.409440   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has current primary IP address 192.168.61.82 and MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.409475   50469 main.go:141] libmachine: (stopped-upgrade-217447) found domain IP: 192.168.61.82
	I0926 23:43:06.409508   50469 main.go:141] libmachine: (stopped-upgrade-217447) reserving static IP address...
	I0926 23:43:06.409957   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | found host DHCP lease matching {name: "stopped-upgrade-217447", mac: "52:54:00:b4:98:22", ip: "192.168.61.82"} in network mk-stopped-upgrade-217447: {Iface:virbr4 ExpiryTime:2025-09-27 00:42:17 +0000 UTC Type:0 Mac:52:54:00:b4:98:22 Iaid: IPaddr:192.168.61.82 Prefix:24 Hostname:stopped-upgrade-217447 Clientid:01:52:54:00:b4:98:22}
	I0926 23:43:06.409990   50469 main.go:141] libmachine: (stopped-upgrade-217447) reserved static IP address 192.168.61.82 for domain stopped-upgrade-217447
	I0926 23:43:06.410013   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | skip adding static IP to network mk-stopped-upgrade-217447 - found existing host DHCP lease matching {name: "stopped-upgrade-217447", mac: "52:54:00:b4:98:22", ip: "192.168.61.82"}
	I0926 23:43:06.410033   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Getting to WaitForSSH function...
	I0926 23:43:06.410074   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for SSH...
	I0926 23:43:06.412583   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.413006   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:98:22", ip: ""} in network mk-stopped-upgrade-217447: {Iface:virbr4 ExpiryTime:2025-09-27 00:42:17 +0000 UTC Type:0 Mac:52:54:00:b4:98:22 Iaid: IPaddr:192.168.61.82 Prefix:24 Hostname:stopped-upgrade-217447 Clientid:01:52:54:00:b4:98:22}
	I0926 23:43:06.413046   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined IP address 192.168.61.82 and MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.413227   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Using SSH client type: external
	I0926 23:43:06.413262   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/id_rsa (-rw-------)
	I0926 23:43:06.413297   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:43:06.413310   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | About to run SSH command:
	I0926 23:43:06.413327   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | exit 0
	I0926 23:43:06.754941   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:06.758802   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:06.759298   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:06.759329   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:06.759731   48840 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 23:43:06.765271   48840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:43:06.786414   48840 kubeadm.go:883] updating cluster {Name:force-systemd-env-429303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName
:force-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:43:06.786558   48840 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:43:06.786637   48840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:43:06.838127   48840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 23:43:06.838217   48840 ssh_runner.go:195] Run: which lz4
	I0926 23:43:06.844709   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0926 23:43:06.844815   48840 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:43:06.850748   48840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 23:43:06.850790   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	W0926 23:43:08.074177   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	W0926 23:43:10.075004   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:10.573229   48726 pod_ready.go:94] pod "kube-apiserver-pause-298014" is "Ready"
	I0926 23:43:10.573262   48726 pod_ready.go:86] duration metric: took 9.007726914s for pod "kube-apiserver-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:10.576521   48726 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:08.726992   48840 crio.go:462] duration metric: took 1.882207333s to copy over tarball
	I0926 23:43:08.727099   48840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:43:10.477914   48840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.750773882s)
	I0926 23:43:10.477970   48840 crio.go:469] duration metric: took 1.750916175s to extract the tarball
	I0926 23:43:10.477981   48840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 23:43:10.525930   48840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:43:10.575011   48840 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:43:10.575032   48840 cache_images.go:85] Images are preloaded, skipping loading
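crio.go decides whether the preload tarball is needed by looking for a key image (kube-apiserver at the target version) in `crictl images --output json`: before extraction it was missing, afterwards "all images are preloaded". A sketch of that check, assuming crictl's usual {"images":[{"repoTags":[...]}]} JSON shape:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        const want = "registry.k8s.io/kube-apiserver:v1.34.0"
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    fmt.Println("images already preloaded")
                    return
                }
            }
        }
        fmt.Println("preload missing, would copy and extract preloaded.tar.lz4")
    }
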
	I0926 23:43:10.575040   48840 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.34.0 crio true true} ...
	I0926 23:43:10.575145   48840 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-429303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:force-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 23:43:10.575221   48840 ssh_runner.go:195] Run: crio config
	I0926 23:43:10.632023   48840 cni.go:84] Creating CNI manager for ""
	I0926 23:43:10.632049   48840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:43:10.632069   48840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:43:10.632097   48840 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-429303 NodeName:force-systemd-env-429303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:43:10.632307   48840 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-429303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.231"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:43:10.632381   48840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:43:10.648055   48840 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:43:10.648141   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:43:10.663715   48840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0926 23:43:10.690972   48840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:43:10.717106   48840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
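The kubeadm.yaml.new just written is the multi-document YAML printed earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file. A quick sanity-check sketch that walks those documents with gopkg.in/yaml.v3 and prints each apiVersion/kind (path taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more "---"-separated documents
                }
                panic(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }
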
	I0926 23:43:10.752087   48840 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0926 23:43:10.756882   48840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
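The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends a fresh mapping (the same trick used earlier for host.minikube.internal). The same idea in Go, assuming the host/IP pair shown in the log:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.231\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping, mirroring `grep -v $'\tcontrol-plane.minikube.internal$'`.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
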
	I0926 23:43:10.775153   48840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:10.920945   48840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:43:10.942055   48840 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303 for IP: 192.168.39.231
	I0926 23:43:10.942083   48840 certs.go:195] generating shared ca certs ...
	I0926 23:43:10.942103   48840 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:10.942287   48840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:43:10.942357   48840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:43:10.942373   48840 certs.go:257] generating profile certs ...
	I0926 23:43:10.942470   48840 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key
	I0926 23:43:10.942493   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt with IP's: []
	I0926 23:43:11.281065   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt ...
	I0926 23:43:11.281095   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt: {Name:mkde6d31cac26c55d88ad9c54eb2eb8be9c111cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.281260   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key ...
	I0926 23:43:11.281273   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key: {Name:mkf12a393dbb914d629bca27601d32e142c49271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.281363   48840 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d
	I0926 23:43:11.281380   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0926 23:43:11.603425   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d ...
	I0926 23:43:11.603456   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d: {Name:mk67fd2898e3dcb39466e4e0060b8bc203034709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.603662   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d ...
	I0926 23:43:11.603683   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d: {Name:mkbeb17ba68d0b7c83f919216d84bdcf58042d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.603819   48840 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt
	I0926 23:43:11.603951   48840 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key
	I0926 23:43:11.604038   48840 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key
	I0926 23:43:11.604060   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt with IP's: []
	I0926 23:43:12.051358   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt ...
	I0926 23:43:12.051390   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt: {Name:mk85de6de2701a822e468e4b010d87cc631396d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:12.051604   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key ...
	I0926 23:43:12.051633   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key: {Name:mk14b40c7a773659ebc4c7a4f66c7a4056eaaf9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:12.051750   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 23:43:12.051774   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 23:43:12.051792   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 23:43:12.051810   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 23:43:12.051843   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 23:43:12.051869   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 23:43:12.051891   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 23:43:12.051905   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
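certs.go reuses the shared minikubeCA/proxyClientCA pair when it is already valid and only generates the per-profile client, apiserver, and aggregator certs against it, as the log shows. A compact sketch of producing one self-signed CA with crypto/x509 (key size, validity, and file names here are placeholders, not minikube's exact parameters):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template is also the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
            panic(err)
        }
    }
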
	I0926 23:43:12.051987   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:43:12.052026   48840 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:43:12.052033   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:43:12.052056   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:43:12.052079   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:43:12.052103   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:43:12.052139   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:43:12.052164   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.052178   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.052190   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem -> /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.052730   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:43:12.089540   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:43:12.125737   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:43:12.161325   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:43:12.196071   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0926 23:43:12.231304   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 23:43:12.264027   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:43:12.301088   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:43:12.333473   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:43:12.367223   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:43:12.398596   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:43:12.430079   48840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:43:12.453473   48840 ssh_runner.go:195] Run: openssl version
	I0926 23:43:12.461289   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:43:12.483331   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.491960   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.492039   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.500509   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:43:12.516773   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:43:12.532142   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.538355   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.538417   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.547264   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:43:12.566940   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:43:12.585141   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.592905   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.592979   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.602189   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
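The three test/ln/openssl sequences above install each CA into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which is how OpenSSL discovers trusted CAs by directory lookup. A minimal local sketch of that pattern in Go, shelling out to openssl the same way the logged commands do; the paths and the installCA helper are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA copies a CA certificate into shareDir and symlinks it into
// certsDir under its OpenSSL subject-hash name (<hash>.0), mirroring the
// test/ln/openssl steps in the log above. Illustrative sketch only.
func installCA(src, shareDir, certsDir string) error {
	dst := filepath.Join(shareDir, filepath.Base(src))

	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return err
	}

	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: ln -fs <cert> <certsDir>/<hash>.0
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("ca.crt", "/usr/share/ca-certificates", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}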
	I0926 23:43:12.619074   48840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:43:12.625727   48840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:43:12.625790   48840 kubeadm.go:400] StartCluster: {Name:force-systemd-env-429303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:fo
rce-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:43:12.625921   48840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:43:12.626022   48840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:43:12.674986   48840 cri.go:89] found id: ""
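The probe just above (cri.go) asks crictl for every kube-system container and gets back no IDs, which is why the found id: "" line is empty. A hedged Go sketch of the same probe run via os/exec; the crictl flags are the ones visible in the log, the function name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the probe in the log:
//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
// It returns the container IDs printed one per line; an empty slice
// corresponds to the found id: "" result above. Sketch only.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}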
	I0926 23:43:12.675076   48840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:43:12.694000   48840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:43:12.708108   48840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:43:12.724581   48840 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:43:12.724610   48840 kubeadm.go:157] found existing configuration files:
	
	I0926 23:43:12.724665   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:43:12.737584   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:43:12.737652   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:43:12.752442   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:43:12.768746   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:43:12.768817   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:43:12.783895   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:43:12.796567   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:43:12.796650   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:43:12.816818   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:43:12.830524   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:43:12.830607   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
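The four grep/rm pairs above are kubeadm.go's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the subsequent kubeadm init can regenerate it. A plain-Go sketch of the same check (paths and endpoint copied from the log; pruneStaleKubeconfigs is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do
// not reference the expected control-plane endpoint, mirroring the grep/rm
// sequence in the log. Files that are already missing are left alone.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the control plane, keep it
		}
		// Equivalent of: sudo rm -f <path>
		if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Fprintln(os.Stderr, "remove:", rmErr)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}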
	I0926 23:43:12.843697   48840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	W0926 23:43:12.585866   48726 pod_ready.go:104] pod "kube-controller-manager-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:14.583781   48726 pod_ready.go:94] pod "kube-controller-manager-pause-298014" is "Ready"
	I0926 23:43:14.583814   48726 pod_ready.go:86] duration metric: took 4.007259459s for pod "kube-controller-manager-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.587530   48726 pod_ready.go:83] waiting for pod "kube-proxy-2s884" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.595420   48726 pod_ready.go:94] pod "kube-proxy-2s884" is "Ready"
	I0926 23:43:14.595443   48726 pod_ready.go:86] duration metric: took 7.882168ms for pod "kube-proxy-2s884" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.598843   48726 pod_ready.go:83] waiting for pod "kube-scheduler-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.605372   48726 pod_ready.go:94] pod "kube-scheduler-pause-298014" is "Ready"
	I0926 23:43:14.605408   48726 pod_ready.go:86] duration metric: took 6.538554ms for pod "kube-scheduler-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.605423   48726 pod_ready.go:40] duration metric: took 14.065639033s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:43:14.664762   48726 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:43:14.666706   48726 out.go:179] * Done! kubectl is now configured to use "pause-298014" cluster and "default" namespace by default
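The pod_ready.go lines above poll each kube-system component pod until it reports the Ready condition before the run is declared done. A hedged client-go sketch of that kind of wait; the label selector, timeout, and the waitPodsReady helper are illustrative assumptions, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodsReady polls until every pod matching the label selector in the
// namespace reports the Ready condition, mirroring the pod_ready.go waits
// in the log. Transient list errors simply cause another poll. Sketch only.
func waitPodsReady(clientset *kubernetes.Clientset, namespace, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, pod := range pods.Items {
				ready := false
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	if err := waitPodsReady(clientset, "kube-system", "k8s-app=kube-proxy", 4*time.Minute); err != nil {
		fmt.Println("pods not ready:", err)
	}
}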
	
	
	==> CRI-O <==
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.524144239Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-74fdn,Uid:930aa1d0-38cf-4e8b-8d24-e674f37f457b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1758930168011267187,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T23:41:51.710729438Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&PodSandboxMetadata{Name:etcd-pause-298014,Uid:d7a45e85e56140d091175e85ada12059,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1758930167954788235,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.242:2379,kubernetes.io/config.hash: d7a45e85e56140d091175e85ada12059,kubernetes.io/config.seen: 2025-09-26T23:41:46.159123680Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-298014,Uid:b67f6cccd1e1255753574888d6b0323d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1758930167850141702,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: b67f6cccd1e1255753574888d6b0323d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b67f6cccd1e1255753574888d6b0323d,kubernetes.io/config.seen: 2025-09-26T23:41:46.159131338Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-298014,Uid:15f67e4decc7a054ae1e94a2b570f4fc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1758930167805775936,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.242:8443,kubernetes.io/config.hash: 15f67e4decc7a054ae1e94a2b570f4fc,kubernetes.io/config.seen: 2025-09-26T23:41:46.159128714Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-298014,Uid:2c237e844ef8ee507d46fbb3a8e46be9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1758930167693970422,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c237e844ef8ee507d46fbb3a8e46be9,kubernetes.io/config.seen: 2025-09-26T23:41:46.159130097Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&PodSandboxMetadata{Name:kube-proxy-2s884,Uid:eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1758930167692926624,Labels:map[string]string{controller-revision-hash: 6f475c7966,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T23:41:51.341790227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-74fdn,Uid:930aa1d0-38cf-4e8b-8d24-e674f37f457b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1758930112077738883,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-09-26T23:41:51.710729438Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab5c7d9eca6af19583f00a6dfd91c8f535317230723f89821c8eabf23c87a888,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-bsftw,Uid:58a72bfb-4e12-48ff-a456-816da73f63c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1758930112009425422,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-bsftw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a72bfb-4e12-48ff-a456-816da73f63c9,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-26T23:41:51.675441925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-298014,Uid:b67f6cccd1e1255753574888d6b0323d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1758930098162
222986,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b67f6cccd1e1255753574888d6b0323d,kubernetes.io/config.seen: 2025-09-26T23:41:37.565171711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&PodSandboxMetadata{Name:etcd-pause-298014,Uid:d7a45e85e56140d091175e85ada12059,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1758930098149006910,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-
client-urls: https://192.168.83.242:2379,kubernetes.io/config.hash: d7a45e85e56140d091175e85ada12059,kubernetes.io/config.seen: 2025-09-26T23:41:37.565151283Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=71692005-f6e4-4ba4-8bab-89558987e2fc name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.526516024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcbf6219-7147-4075-92d0-44ffe63b1fc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.526622935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcbf6219-7147-4075-92d0-44ffe63b1fc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.526981520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcbf6219-7147-4075-92d0-44ffe63b1fc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.572237692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d282514-c3ff-4efd-aa8f-b197bd414330 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.572387987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d282514-c3ff-4efd-aa8f-b197bd414330 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.574659032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c42be04-b676-4dbd-8782-ffdb3d561408 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.575575407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930195575252868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c42be04-b676-4dbd-8782-ffdb3d561408 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.577090233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92fa7947-1672-403f-871a-58e7bb8ee1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.577162728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92fa7947-1672-403f-871a-58e7bb8ee1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.577503843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92fa7947-1672-403f-871a-58e7bb8ee1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.628850260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f51cc70-674c-4e47-8c93-5145717d8954 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.628956932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f51cc70-674c-4e47-8c93-5145717d8954 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.630775973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a1b5559-38d0-4d37-b67c-9cd5cb1cc904 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.631662500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930195631575411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a1b5559-38d0-4d37-b67c-9cd5cb1cc904 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.632270093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e23c3b7c-56d3-44f6-a8fb-4c30a5c2f158 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.632539028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e23c3b7c-56d3-44f6-a8fb-4c30a5c2f158 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.633401526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e23c3b7c-56d3-44f6-a8fb-4c30a5c2f158 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.690065123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3054fb4f-9364-48eb-a93c-a21b79b0117d name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.690221136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3054fb4f-9364-48eb-a93c-a21b79b0117d name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.692605977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf9e5e26-9084-42e1-88b6-c0bcba09623d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.693070385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930195693048954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf9e5e26-9084-42e1-88b6-c0bcba09623d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.693795163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8361067-c0b9-4086-8323-3a1e79cf60e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.693893167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8361067-c0b9-4086-8323-3a1e79cf60e6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:15 pause-298014 crio[2823]: time="2025-09-26 23:43:15.694120651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8361067-c0b9-4086-8323-3a1e79cf60e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d68eba2872093       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   16 seconds ago       Running             kube-proxy                2                   6838664d8ef4c       kube-proxy-2s884
	67bc6f73e4cc6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   20 seconds ago       Running             kube-controller-manager   2                   c50d1f9da7946       kube-controller-manager-pause-298014
	9c4b0724f9fc0       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   20 seconds ago       Running             kube-apiserver            2                   328cd1b818aa7       kube-apiserver-pause-298014
	99c30e76a20c2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   26 seconds ago       Running             coredns                   1                   28be4bf6eed2d       coredns-66bc5c9577-74fdn
	75724a7941be2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   27 seconds ago       Running             etcd                      1                   0360178764e4a       etcd-pause-298014
	a55b9eb950242       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   27 seconds ago       Running             kube-scheduler            1                   25ba2c7c4ad25       kube-scheduler-pause-298014
	0eb5d37736bbc       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   27 seconds ago       Exited              kube-apiserver            1                   328cd1b818aa7       kube-apiserver-pause-298014
	44947fdc81a0b       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   27 seconds ago       Exited              kube-proxy                1                   6838664d8ef4c       kube-proxy-2s884
	1567d2e11655d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   27 seconds ago       Exited              kube-controller-manager   1                   c50d1f9da7946       kube-controller-manager-pause-298014
	51dc69520ea56       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   3de365eb9d21b       coredns-66bc5c9577-74fdn
	2b15910803a54       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   About a minute ago   Exited              kube-scheduler            0                   4c31f34cff33b       kube-scheduler-pause-298014
	4291340e3901f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   0903619f0bfa1       etcd-pause-298014
	
	
	==> coredns [51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39755 - 40679 "HINFO IN 8554951345633849584.1848918287105691358. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020050927s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:60138 - 52575 "HINFO IN 2465005106924278844.7349295519571415587. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012179428s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-298014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-298014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=pause-298014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_41_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-298014
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 23:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.242
	  Hostname:    pause-298014
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a57c05bd83b1481b8bb3b7452b744da5
	  System UUID:                a57c05bd-83b1-481b-8bb3-b7452b744da5
	  Boot ID:                    0485fae6-db91-4be6-a593-b2700010b548
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-74fdn                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     85s
	  kube-system                 etcd-pause-298014                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-298014             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-298014    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-2s884                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-298014             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-298014 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-298014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-298014 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                90s                kubelet          Node pause-298014 status is now: NodeReady
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           86s                node-controller  Node pause-298014 event: Registered Node pause-298014 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-298014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-298014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-298014 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-298014 event: Registered Node pause-298014 in Controller
	
	
	==> dmesg <==
	[Sep26 23:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000077] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005185] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.520681] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086956] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.151630] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.165735] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.024894] kauditd_printk_skb: 18 callbacks suppressed
	[Sep26 23:42] kauditd_printk_skb: 219 callbacks suppressed
	[ +25.937198] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.127220] kauditd_printk_skb: 319 callbacks suppressed
	[Sep26 23:43] kauditd_printk_skb: 63 callbacks suppressed
	[  +4.763759] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c] <==
	{"level":"warn","ts":"2025-09-26T23:41:43.581796Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.215037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-09-26T23:41:43.581915Z","caller":"traceutil/trace.go:172","msg":"trace[1582729659] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:70; }","duration":"153.357746ms","start":"2025-09-26T23:41:43.428546Z","end":"2025-09-26T23:41:43.581904Z","steps":["trace[1582729659] 'range keys from in-memory index tree'  (duration: 143.149961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:41:43.584212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.147518ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1763891015197594084 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-09-26T23:41:43.584482Z","caller":"traceutil/trace.go:172","msg":"trace[1098904871] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"203.088579ms","start":"2025-09-26T23:41:43.381381Z","end":"2025-09-26T23:41:43.584469Z","steps":["trace[1098904871] 'process raft request'  (duration: 57.2432ms)","trace[1098904871] 'compare'  (duration: 143.00479ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T23:41:43.780399Z","caller":"traceutil/trace.go:172","msg":"trace[79517878] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"189.614813ms","start":"2025-09-26T23:41:43.590651Z","end":"2025-09-26T23:41:43.780266Z","steps":["trace[79517878] 'process raft request'  (duration: 128.195699ms)","trace[79517878] 'compare'  (duration: 61.32184ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:42:26.916430Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.693061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:42:26.916510Z","caller":"traceutil/trace.go:172","msg":"trace[1775930061] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:402; }","duration":"120.83423ms","start":"2025-09-26T23:42:26.795661Z","end":"2025-09-26T23:42:26.916496Z","steps":["trace[1775930061] 'range keys from in-memory index tree'  (duration: 120.555635ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:42:38.438240Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T23:42:38.438435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-298014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.242:2380"],"advertise-client-urls":["https://192.168.83.242:2379"]}
	{"level":"error","ts":"2025-09-26T23:42:38.438555Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T23:42:38.452515Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-26T23:42:38.529651Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530034Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.242:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T23:42:38.530175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.242:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T23:42:38.529835Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.529873Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"35987a252efe187a","current-leader-member-id":"35987a252efe187a"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530006Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530437Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-26T23:42:38.530448Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-26T23:42:38.530453Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.530464Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T23:42:38.534200Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.242:2380"}
	{"level":"error","ts":"2025-09-26T23:42:38.534383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.242:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.534429Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.242:2380"}
	{"level":"info","ts":"2025-09-26T23:42:38.534441Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-298014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.242:2380"],"advertise-client-urls":["https://192.168.83.242:2379"]}
	
	
	==> etcd [75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84] <==
	{"level":"warn","ts":"2025-09-26T23:42:57.473144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.489802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.499037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.506413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.533562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.544001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.551401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.562336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.569800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.579455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.588364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.596351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.608395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.616403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.627939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.636608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.646355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.654687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.664652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.672489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.683588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.702482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.712552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.722083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.797527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:43:16 up 2 min,  0 users,  load average: 1.61, 0.63, 0.23
	Linux pause-298014 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c] <==
	E0926 23:42:52.448078       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=411&timeout=6m12s&timeoutSeconds=372&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Secret"
	E0926 23:42:52.448143       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies?allowWatchBookmarks=true&resourceVersion=411&timeout=6m40s&timeoutSeconds=400&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ValidatingAdmissionPolicy"
	E0926 23:42:52.448191       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/networking.k8s.io/v1/ingressclasses?allowWatchBookmarks=true&resourceVersion=411&timeout=7m50s&timeoutSeconds=470&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.IngressClass"
	E0926 23:42:52.448243       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=411&timeout=5m56s&timeoutSeconds=356&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceQuota"
	I0926 23:42:52.448370       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	E0926 23:42:52.448464       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=411&timeout=7m12s&timeoutSeconds=432&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ClusterRole"
	E0926 23:42:52.448501       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E0926 23:42:52.448525       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="configmaps"
	E0926 23:42:52.448548       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="kubernetes-service-cidr-controller"
	E0926 23:42:52.448569       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F0926 23:42:52.448609       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I0926 23:42:52.555634       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I0926 23:42:52.555719       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0926 23:42:52.555789       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 23:42:52.555831       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 23:42:52.557648       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0926 23:42:52.557695       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0926 23:42:52.557706       1 policy_source.go:240] refreshing policies
	E0926 23:42:52.557749       1 plugin.go:185] "Unhandled Error" err="policy source context unexpectedly closed: handler {0x1e0c480 0x1e0c460 0x1e0c440} was not added to shared informer because it has stopped already" logger="UnhandledError"
	I0926 23:42:52.557898       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 23:42:52.558229       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 23:42:52.559429       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0926 23:42:52.559459       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0926 23:42:52.559477       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0926 23:42:52.561251       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=411&timeout=5m48s&timeoutSeconds=348&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RoleBinding"
	
	
	==> kube-apiserver [9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193] <==
	I0926 23:42:58.650720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0926 23:42:58.650729       1 cache.go:39] Caches are synced for autoregister controller
	I0926 23:42:58.653557       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0926 23:42:58.654339       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0926 23:42:58.654455       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0926 23:42:58.655055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0926 23:42:58.660494       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0926 23:42:58.660544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0926 23:42:58.661545       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0926 23:42:58.661613       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0926 23:42:58.672619       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0926 23:42:58.675128       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0926 23:42:58.675339       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0926 23:42:58.684195       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0926 23:42:58.684405       1 policy_source.go:240] refreshing policies
	I0926 23:42:58.693856       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 23:42:59.270678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0926 23:42:59.360404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0926 23:43:00.095771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0926 23:43:00.157686       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0926 23:43:00.197546       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 23:43:00.205508       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 23:43:02.005270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 23:43:02.248403       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 23:43:06.807659       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a60ecb5] <==
	I0926 23:42:50.479346       1 serving.go:386] Generated self-signed cert in-memory
	I0926 23:42:51.311160       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 23:42:51.311199       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:42:51.313714       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 23:42:51.313819       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 23:42:51.314428       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 23:42:51.314917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e] <==
	I0926 23:43:02.025583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 23:43:02.026872       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 23:43:02.028366       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 23:43:02.032782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 23:43:02.037990       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 23:43:02.041678       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0926 23:43:02.041900       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0926 23:43:02.041937       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0926 23:43:02.041941       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0926 23:43:02.041947       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0926 23:43:02.043840       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 23:43:02.043908       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0926 23:43:02.044009       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 23:43:02.044141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 23:43:02.044142       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 23:43:02.044242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-298014"
	I0926 23:43:02.044382       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 23:43:02.044472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 23:43:02.043897       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 23:43:02.044676       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 23:43:02.044873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 23:43:02.045130       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 23:43:02.049262       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 23:43:02.049673       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 23:43:02.050200       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32] <==
	I0926 23:42:50.224500       1 server_linux.go:53] "Using iptables proxy"
	I0926 23:42:51.081707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	
	
	==> kube-proxy [d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c] <==
	I0926 23:42:59.758940       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:42:59.859749       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:42:59.859812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.242"]
	E0926 23:42:59.859923       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:42:59.921516       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 23:42:59.921636       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 23:42:59.921658       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:42:59.948938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:42:59.949337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:42:59.949352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:42:59.954685       1 config.go:200] "Starting service config controller"
	I0926 23:42:59.954697       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:42:59.954714       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:42:59.954717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:42:59.954726       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:42:59.954729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:42:59.956117       1 config.go:309] "Starting node config controller"
	I0926 23:42:59.958081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:42:59.958132       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:43:00.055843       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:43:00.055957       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 23:43:00.056251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16] <==
	E0926 23:41:43.028680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:41:43.028866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:41:43.029228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:41:43.029420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:41:43.927488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:41:43.956779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 23:41:44.008183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 23:41:44.071914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:41:44.132557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:41:44.196987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 23:41:44.231009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:41:44.257800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 23:41:44.265561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 23:41:44.268166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 23:41:44.341165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 23:41:44.370389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 23:41:44.395930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:41:44.406676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 23:41:45.915409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:42:38.446202       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 23:42:38.446271       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 23:42:38.451526       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0926 23:42:38.451618       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:42:38.451879       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 23:42:38.451946       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539] <==
	E0926 23:42:54.754250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.242:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 23:42:54.953080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.83.242:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 23:42:55.081464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.242:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:42:55.164576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.83.242:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:42:55.165855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.83.242:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:42:58.521383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 23:42:58.523831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 23:42:58.524041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:42:58.524113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 23:42:58.524273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 23:42:58.524399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 23:42:58.524459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 23:42:58.524518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 23:42:58.524574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 23:42:58.524643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 23:42:58.524692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 23:42:58.524748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 23:42:58.524804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:42:58.524862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:42:58.524922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:42:58.524992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 23:42:58.525052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:42:58.525115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 23:42:58.525178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0926 23:43:03.768577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.442731    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.442798    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.444480    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.447126    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.537835    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.765564    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-298014\" already exists" pod="kube-system/kube-scheduler-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.765627    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.778955    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-298014\" already exists" pod="kube-system/etcd-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.779017    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.795848    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-298014\" already exists" pod="kube-system/kube-apiserver-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.795971    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802069    3787 kubelet_node_status.go:124] "Node was previously registered" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802154    3787 kubelet_node_status.go:78] "Successfully registered node" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802180    3787 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.803703    3787 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.809858    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-298014\" already exists" pod="kube-system/kube-controller-manager-pause-298014"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.218915    3787 apiserver.go:52] "Watching apiserver"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.240783    3787 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.264805    3787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43-xtables-lock\") pod \"kube-proxy-2s884\" (UID: \"eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43\") " pod="kube-system/kube-proxy-2s884"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.266220    3787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43-lib-modules\") pod \"kube-proxy-2s884\" (UID: \"eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43\") " pod="kube-system/kube-proxy-2s884"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.523744    3787 scope.go:117] "RemoveContainer" containerID="44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32"
	Sep 26 23:43:05 pause-298014 kubelet[3787]: E0926 23:43:05.398522    3787 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758930185397659925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:05 pause-298014 kubelet[3787]: E0926 23:43:05.398579    3787 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758930185397659925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:15 pause-298014 kubelet[3787]: E0926 23:43:15.402169    3787 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758930195400774313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:15 pause-298014 kubelet[3787]: E0926 23:43:15.402211    3787 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758930195400774313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-298014 -n pause-298014
helpers_test.go:269: (dbg) Run:  kubectl --context pause-298014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-298014 -n pause-298014
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-298014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-298014 logs -n 25: (1.710480551s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-421834 sudo systemctl status kubelet --all --full --no-pager                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat kubelet --no-pager                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo journalctl -xeu kubelet --all --full --no-pager                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/kubernetes/kubelet.conf                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /var/lib/kubelet/config.yaml                                                                           │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status docker --all --full --no-pager                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat docker --no-pager                                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/docker/daemon.json                                                                                │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo docker system info                                                                                         │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status cri-docker --all --full --no-pager                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat cri-docker --no-pager                                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                   │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /usr/lib/systemd/system/cri-docker.service                                                             │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cri-dockerd --version                                                                                      │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status containerd --all --full --no-pager                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat containerd --no-pager                                                                        │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /lib/systemd/system/containerd.service                                                                 │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo cat /etc/containerd/config.toml                                                                            │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo containerd config dump                                                                                     │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl status crio --all --full --no-pager                                                              │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo systemctl cat crio --no-pager                                                                              │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ ssh     │ -p cilium-421834 sudo crio config                                                                                                │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	│ delete  │ -p cilium-421834                                                                                                                 │ cilium-421834          │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │ 26 Sep 25 23:42 UTC │
	│ start   │ -p cert-expiration-648174 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-expiration-648174 │ jenkins │ v1.37.0 │ 26 Sep 25 23:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:42:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:42:51.576944   51192 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:42:51.577236   51192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:51.577242   51192 out.go:374] Setting ErrFile to fd 2...
	I0926 23:42:51.577249   51192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:51.578911   51192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:42:51.579395   51192 out.go:368] Setting JSON to false
	I0926 23:42:51.580288   51192 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5117,"bootTime":1758925055,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:42:51.580368   51192 start.go:140] virtualization: kvm guest
	I0926 23:42:51.586104   51192 out.go:179] * [cert-expiration-648174] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:42:51.587814   51192 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:42:51.587859   51192 notify.go:220] Checking for updates...
	I0926 23:42:51.590199   51192 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:42:51.591330   51192 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:42:51.592336   51192 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:42:51.593481   51192 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:42:51.594643   51192 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:42:51.596402   51192 config.go:182] Loaded profile config "force-systemd-env-429303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:51.596611   51192 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:51.596736   51192 config.go:182] Loaded profile config "stopped-upgrade-217447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0926 23:42:51.596881   51192 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:42:51.629717   51192 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 23:42:51.630817   51192 start.go:304] selected driver: kvm2
	I0926 23:42:51.630839   51192 start.go:924] validating driver "kvm2" against <nil>
	I0926 23:42:51.630861   51192 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:42:51.631912   51192 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:51.631997   51192 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:51.647291   51192 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:51.647315   51192 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:42:51.662100   51192 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:42:51.662133   51192 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:42:51.662379   51192 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 23:42:51.662400   51192 cni.go:84] Creating CNI manager for ""
	I0926 23:42:51.662439   51192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:51.662445   51192 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:42:51.662494   51192 start.go:348] cluster config:
	{Name:cert-expiration-648174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-648174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:42:51.662574   51192 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:42:51.664937   51192 out.go:179] * Starting "cert-expiration-648174" primary control-plane node in "cert-expiration-648174" cluster
	I0926 23:42:47.444965   50469 preload.go:131] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I0926 23:42:47.445016   50469 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I0926 23:42:47.445024   50469 cache.go:58] Caching tarball of preloaded images
	I0926 23:42:47.445101   50469 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:42:47.445111   50469 cache.go:61] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I0926 23:42:47.445205   50469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/stopped-upgrade-217447/config.json ...
	I0926 23:42:47.445461   50469 start.go:360] acquireMachinesLock for stopped-upgrade-217447: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:42:48.266215   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:48.266978   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:48.267010   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:48.267347   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:48.267403   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:48.267323   49836 retry.go:31] will retry after 493.294397ms: waiting for domain to come up
	I0926 23:42:48.762048   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:48.762702   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:48.762725   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:48.763140   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:48.763172   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:48.763121   49836 retry.go:31] will retry after 842.369329ms: waiting for domain to come up
	I0926 23:42:49.608053   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:49.608869   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:49.608897   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:49.609190   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:49.610355   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:49.610252   49836 retry.go:31] will retry after 779.366798ms: waiting for domain to come up
	I0926 23:42:50.391116   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:50.391799   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:50.391838   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:50.392166   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:50.392189   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:50.392152   49836 retry.go:31] will retry after 1.124715923s: waiting for domain to come up
	I0926 23:42:51.519348   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:51.520139   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:51.520180   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:51.520450   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:51.520478   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:51.520435   49836 retry.go:31] will retry after 1.206643322s: waiting for domain to come up
	I0926 23:42:52.729018   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:52.729562   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:52.729592   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:52.729962   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:52.729991   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:52.729936   49836 retry.go:31] will retry after 2.121355284s: waiting for domain to come up
	I0926 23:42:53.003817   48726 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c 44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32 1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a60ecb5 51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437 645856b5f963235624cf1b074088fc311ed16a9d7a3aa3d34cb9f5291ea4d996 168bab96e50b2f889b38634530403e8a10ae45bb9fd35cff73d3214501ddcb1c 2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16 4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c f967ebe7302f5cf4de8213a6b3a0c7a4436980ecd58e590ba1e4bd41c75d9839: (3.725512685s)
	I0926 23:42:53.003913   48726 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 23:42:53.042699   48726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:42:53.056737   48726 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 26 23:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Sep 26 23:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Sep 26 23:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Sep 26 23:41 /etc/kubernetes/scheduler.conf
	
	I0926 23:42:53.056820   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:42:53.069626   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:42:53.082275   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.082340   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:42:53.095446   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:42:53.107190   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.107258   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:42:53.119696   48726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:42:53.136739   48726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:42:53.136800   48726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
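	[editor's note] The three grep/rm cycles above check whether each existing kubeconfig still points at the expected control-plane endpoint and delete it if not, so the following kubeadm phases regenerate it. A minimal local-filesystem sketch of that check, assuming direct file access rather than the ssh_runner used in the log (the paths and endpoint are taken from the log, the helper itself is hypothetical):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// ensureEndpoint removes the kubeconfig when it no longer references the
	// expected API server endpoint, mirroring the "may not be in … - will remove" step.
	func ensureEndpoint(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if bytes.Contains(data, []byte(endpoint)) {
			return nil // kubeconfig already targets the expected endpoint
		}
		fmt.Printf("%q not found in %s - removing so kubeadm can recreate it\n", endpoint, path)
		return os.Remove(path)
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := ensureEndpoint(f, "https://control-plane.minikube.internal:8443"); err != nil {
				fmt.Println(err)
			}
		}
	}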
	I0926 23:42:53.153509   48726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:42:53.167200   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:53.228781   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:54.839652   48726 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.610824832s)
	I0926 23:42:54.839743   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:55.130416   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:55.210245   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:42:55.317467   48726 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:42:55.317564   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:55.818663   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:51.666079   51192 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:42:51.666119   51192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:42:51.666126   51192 cache.go:58] Caching tarball of preloaded images
	I0926 23:42:51.666216   51192 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:42:51.666224   51192 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:42:51.666346   51192 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/cert-expiration-648174/config.json ...
	I0926 23:42:51.666363   51192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/cert-expiration-648174/config.json: {Name:mk913b7adc40d2f2db2b9a5b2831fb8b39e6b32c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:42:51.666562   51192 start.go:360] acquireMachinesLock for cert-expiration-648174: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:42:54.853780   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:54.854533   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:54.854562   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:54.854966   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:54.855061   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:54.854979   49836 retry.go:31] will retry after 1.892985147s: waiting for domain to come up
	I0926 23:42:56.750480   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:56.751277   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:56.751310   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:56.751661   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:56.751701   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:56.751643   49836 retry.go:31] will retry after 3.13275826s: waiting for domain to come up
	I0926 23:42:56.317962   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:42:56.352738   48726 api_server.go:72] duration metric: took 1.035265714s to wait for apiserver process to appear ...
	I0926 23:42:56.352772   48726 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:42:56.352812   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.463027   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:42:58.463065   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:42:58.463079   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.587800   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0926 23:42:58.587858   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0926 23:42:58.853385   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:58.858743   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:42:58.858772   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 23:42:59.353597   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:59.360756   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0926 23:42:59.360788   48726 api_server.go:103] status: https://192.168.83.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0926 23:42:59.853026   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:42:59.859472   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 200:
	ok
	I0926 23:42:59.869995   48726 api_server.go:141] control plane version: v1.34.0
	I0926 23:42:59.870029   48726 api_server.go:131] duration metric: took 3.517248976s to wait for apiserver health ...
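	[editor's note] The healthz wait above proceeds through the expected sequence: 403 while anonymous access is rejected, 500 while poststarthooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still failing, then 200 "ok". A rough stand-alone sketch of such a probe, assuming an untrusted serving certificate and a ~500ms poll cadence (this is not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns 200 "ok"
	// or the overall timeout elapses, tolerating interim 403/500 responses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert is not trusted by this ad-hoc probe,
			// so verification is skipped here for illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.83.242:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}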
	I0926 23:42:59.870041   48726 cni.go:84] Creating CNI manager for ""
	I0926 23:42:59.870049   48726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:42:59.871696   48726 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 23:42:59.873432   48726 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:42:59.890535   48726 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 23:42:59.928900   48726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:42:59.943092   48726 system_pods.go:59] 6 kube-system pods found
	I0926 23:42:59.943151   48726 system_pods.go:61] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:42:59.943173   48726 system_pods.go:61] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:42:59.943185   48726 system_pods.go:61] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:42:59.943203   48726 system_pods.go:61] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:42:59.943216   48726 system_pods.go:61] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:42:59.943225   48726 system_pods.go:61] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:42:59.943232   48726 system_pods.go:74] duration metric: took 14.308017ms to wait for pod list to return data ...
	I0926 23:42:59.943247   48726 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:42:59.948141   48726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:42:59.948184   48726 node_conditions.go:123] node cpu capacity is 2
	I0926 23:42:59.948200   48726 node_conditions.go:105] duration metric: took 4.94767ms to run NodePressure ...
	I0926 23:42:59.948262   48726 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 23:43:00.225956   48726 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I0926 23:43:00.230331   48726 kubeadm.go:743] kubelet initialised
	I0926 23:43:00.230354   48726 kubeadm.go:744] duration metric: took 4.373004ms waiting for restarted kubelet to initialise ...
	I0926 23:43:00.230369   48726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:43:00.247899   48726 ops.go:34] apiserver oom_adj: -16
	I0926 23:43:00.247921   48726 kubeadm.go:601] duration metric: took 11.184528516s to restartPrimaryControlPlane
	I0926 23:43:00.247930   48726 kubeadm.go:402] duration metric: took 11.600654079s to StartCluster
	I0926 23:43:00.247947   48726 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:00.248023   48726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:43:00.248768   48726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:00.249048   48726 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.242 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:43:00.249252   48726 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:43:00.249176   48726 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:43:00.250661   48726 out.go:179] * Verifying Kubernetes components...
	I0926 23:43:00.250661   48726 out.go:179] * Enabled addons: 
	I0926 23:43:00.251760   48726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:00.251804   48726 addons.go:514] duration metric: took 2.647759ms for enable addons: enabled=[]
	I0926 23:43:00.440654   48726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:43:00.466865   48726 node_ready.go:35] waiting up to 6m0s for node "pause-298014" to be "Ready" ...
	I0926 23:43:00.471036   48726 node_ready.go:49] node "pause-298014" is "Ready"
	I0926 23:43:00.471065   48726 node_ready.go:38] duration metric: took 4.161964ms for node "pause-298014" to be "Ready" ...
	I0926 23:43:00.471080   48726 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:43:00.471139   48726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:43:00.496807   48726 api_server.go:72] duration metric: took 247.718701ms to wait for apiserver process to appear ...
	I0926 23:43:00.496852   48726 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:43:00.496873   48726 api_server.go:253] Checking apiserver healthz at https://192.168.83.242:8443/healthz ...
	I0926 23:43:00.501482   48726 api_server.go:279] https://192.168.83.242:8443/healthz returned 200:
	ok
	I0926 23:43:00.502498   48726 api_server.go:141] control plane version: v1.34.0
	I0926 23:43:00.502518   48726 api_server.go:131] duration metric: took 5.659732ms to wait for apiserver health ...
	I0926 23:43:00.502527   48726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:43:00.505340   48726 system_pods.go:59] 6 kube-system pods found
	I0926 23:43:00.505362   48726 system_pods.go:61] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:43:00.505370   48726 system_pods.go:61] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:43:00.505376   48726 system_pods.go:61] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:43:00.505382   48726 system_pods.go:61] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:43:00.505386   48726 system_pods.go:61] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running
	I0926 23:43:00.505391   48726 system_pods.go:61] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:43:00.505420   48726 system_pods.go:74] duration metric: took 2.887105ms to wait for pod list to return data ...
	I0926 23:43:00.505427   48726 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:43:00.508349   48726 default_sa.go:45] found service account: "default"
	I0926 23:43:00.508368   48726 default_sa.go:55] duration metric: took 2.934618ms for default service account to be created ...
	I0926 23:43:00.508376   48726 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:43:00.511652   48726 system_pods.go:86] 6 kube-system pods found
	I0926 23:43:00.511669   48726 system_pods.go:89] "coredns-66bc5c9577-74fdn" [930aa1d0-38cf-4e8b-8d24-e674f37f457b] Running
	I0926 23:43:00.511676   48726 system_pods.go:89] "etcd-pause-298014" [c17c7527-c91c-43e7-9235-c3adfc39cf07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:43:00.511683   48726 system_pods.go:89] "kube-apiserver-pause-298014" [10f6cbd2-e710-4684-8f3d-c8d0600c81cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:43:00.511692   48726 system_pods.go:89] "kube-controller-manager-pause-298014" [03d5746f-7b12-4665-9dee-c8697d01ad12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:43:00.511700   48726 system_pods.go:89] "kube-proxy-2s884" [eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43] Running
	I0926 23:43:00.511706   48726 system_pods.go:89] "kube-scheduler-pause-298014" [4470d689-950d-45d5-afa4-f66199a4a3b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:43:00.511713   48726 system_pods.go:126] duration metric: took 3.331612ms to wait for k8s-apps to be running ...
	I0926 23:43:00.511722   48726 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:43:00.511762   48726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:43:00.529608   48726 system_svc.go:56] duration metric: took 17.875479ms WaitForService to wait for kubelet
	I0926 23:43:00.529643   48726 kubeadm.go:586] duration metric: took 280.560106ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:43:00.529664   48726 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:43:00.533649   48726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:43:00.533672   48726 node_conditions.go:123] node cpu capacity is 2
	I0926 23:43:00.533684   48726 node_conditions.go:105] duration metric: took 4.014326ms to run NodePressure ...
	I0926 23:43:00.533697   48726 start.go:241] waiting for startup goroutines ...
	I0926 23:43:00.533707   48726 start.go:246] waiting for cluster config update ...
	I0926 23:43:00.533717   48726 start.go:255] writing updated cluster config ...
	I0926 23:43:00.534083   48726 ssh_runner.go:195] Run: rm -f paused
	I0926 23:43:00.539756   48726 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:43:00.540270   48726 kapi.go:59] client config for pause-298014: &rest.Config{Host:"https://192.168.83.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key", CAFile:"/home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 23:43:00.543908   48726 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74fdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:00.548696   48726 pod_ready.go:94] pod "coredns-66bc5c9577-74fdn" is "Ready"
	I0926 23:43:00.548716   48726 pod_ready.go:86] duration metric: took 4.786672ms for pod "coredns-66bc5c9577-74fdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:00.550610   48726 pod_ready.go:83] waiting for pod "etcd-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
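	[editor's note] The pod_ready.go lines above wait, per control-plane pod, for the PodReady condition to become True. A rough client-go sketch of that per-pod wait; the kubeconfig path is a placeholder and the pod name is taken from this run, while the loop itself is illustrative rather than minikube's implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-298014", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}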
	I0926 23:42:59.886160   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:42:59.886846   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | no network interface addresses found for domain force-systemd-env-429303 (source=lease)
	I0926 23:42:59.886883   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | trying to list again with source=arp
	I0926 23:42:59.887243   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find current IP address of domain force-systemd-env-429303 in network mk-force-systemd-env-429303 (interfaces detected: [])
	I0926 23:42:59.887278   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | I0926 23:42:59.887172   49836 retry.go:31] will retry after 3.300788257s: waiting for domain to come up
	I0926 23:43:04.910220   50469 start.go:364] duration metric: took 17.464721825s to acquireMachinesLock for "stopped-upgrade-217447"
	I0926 23:43:04.910295   50469 start.go:96] Skipping create...Using existing machine configuration
	I0926 23:43:04.910306   50469 fix.go:54] fixHost starting: 
	I0926 23:43:04.910737   50469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:43:04.910790   50469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:43:04.929859   50469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0926 23:43:04.930352   50469 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:43:04.930916   50469 main.go:141] libmachine: Using API Version  1
	I0926 23:43:04.930945   50469 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:43:04.931363   50469 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:43:04.931585   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .DriverName
	I0926 23:43:04.931756   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .GetState
	I0926 23:43:04.933940   50469 fix.go:112] recreateIfNeeded on stopped-upgrade-217447: state=Stopped err=<nil>
	I0926 23:43:04.933999   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .DriverName
	W0926 23:43:04.934188   50469 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 23:43:01.561195   48726 pod_ready.go:94] pod "etcd-pause-298014" is "Ready"
	I0926 23:43:01.561224   48726 pod_ready.go:86] duration metric: took 1.010597472s for pod "etcd-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:01.565507   48726 pod_ready.go:83] waiting for pod "kube-apiserver-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:43:03.573013   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	W0926 23:43:05.574761   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:03.189365   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.190116   48840 main.go:141] libmachine: (force-systemd-env-429303) found domain IP: 192.168.39.231
	I0926 23:43:03.190147   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has current primary IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.190153   48840 main.go:141] libmachine: (force-systemd-env-429303) reserving static IP address...
	I0926 23:43:03.190578   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-429303", mac: "52:54:00:da:63:d4", ip: "192.168.39.231"} in network mk-force-systemd-env-429303
	I0926 23:43:03.410348   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Getting to WaitForSSH function...
	I0926 23:43:03.410376   48840 main.go:141] libmachine: (force-systemd-env-429303) reserved static IP address 192.168.39.231 for domain force-systemd-env-429303
	I0926 23:43:03.410391   48840 main.go:141] libmachine: (force-systemd-env-429303) waiting for SSH...
	I0926 23:43:03.413496   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.414106   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.414144   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.414283   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Using SSH client type: external
	I0926 23:43:03.414329   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa (-rw-------)
	I0926 23:43:03.414376   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:43:03.414394   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | About to run SSH command:
	I0926 23:43:03.414410   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | exit 0
	I0926 23:43:03.549698   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | SSH cmd err, output: <nil>: 
	I0926 23:43:03.550069   48840 main.go:141] libmachine: (force-systemd-env-429303) domain creation complete
	I0926 23:43:03.550401   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetConfigRaw
	I0926 23:43:03.551049   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:03.551260   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:03.551388   48840 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 23:43:03.551403   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetState
	I0926 23:43:03.553014   48840 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 23:43:03.553032   48840 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 23:43:03.553040   48840 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 23:43:03.553048   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.556420   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.556877   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.556905   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.557097   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.557278   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.557396   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.557515   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.557701   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.557977   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.557989   48840 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 23:43:03.675033   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:43:03.675073   48840 main.go:141] libmachine: Detecting the provisioner...
	I0926 23:43:03.675083   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.678605   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.679083   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.679122   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.679252   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.679453   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.679594   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.679698   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.679849   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.680126   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.680141   48840 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 23:43:03.791667   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 23:43:03.791747   48840 main.go:141] libmachine: found compatible host: buildroot
	I0926 23:43:03.791756   48840 main.go:141] libmachine: Provisioning with buildroot...
	I0926 23:43:03.791764   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:03.792067   48840 buildroot.go:166] provisioning hostname "force-systemd-env-429303"
	I0926 23:43:03.792094   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:03.792312   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.795468   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.795914   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.795952   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.796124   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.796315   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.796493   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.796671   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.796875   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.797077   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.797089   48840 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-429303 && echo "force-systemd-env-429303" | sudo tee /etc/hostname
	I0926 23:43:03.934728   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-429303
	
	I0926 23:43:03.934762   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:03.938173   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.938568   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:03.938603   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:03.938792   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:03.939047   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.939229   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:03.939331   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:03.939477   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:03.939712   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:03.939731   48840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-429303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-429303/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-429303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:43:04.063874   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:43:04.063907   48840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:43:04.063978   48840 buildroot.go:174] setting up certificates
	I0926 23:43:04.063993   48840 provision.go:84] configureAuth start
	I0926 23:43:04.064013   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetMachineName
	I0926 23:43:04.064361   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.067749   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.068342   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.068373   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.068538   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.072714   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.073222   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.073255   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.073417   48840 provision.go:143] copyHostCerts
	I0926 23:43:04.073455   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:43:04.073484   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:43:04.073498   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:43:04.073563   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:43:04.073641   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:43:04.073659   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:43:04.073665   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:43:04.073694   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:43:04.073759   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:43:04.073787   48840 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:43:04.073797   48840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:43:04.073862   48840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:43:04.073927   48840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-429303 san=[127.0.0.1 192.168.39.231 force-systemd-env-429303 localhost minikube]
	I0926 23:43:04.175697   48840 provision.go:177] copyRemoteCerts
	I0926 23:43:04.175753   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:43:04.175775   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.178943   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.179344   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.179385   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.179578   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.179800   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.179971   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.180127   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.266933   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0926 23:43:04.267025   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:43:04.299365   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0926 23:43:04.299469   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0926 23:43:04.330135   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0926 23:43:04.330208   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:43:04.361981   48840 provision.go:87] duration metric: took 297.969684ms to configureAuth
	I0926 23:43:04.362019   48840 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:43:04.362187   48840 config.go:182] Loaded profile config "force-systemd-env-429303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:43:04.362277   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.365427   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.365762   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.365786   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.366023   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.366247   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.366430   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.366592   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.366757   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:04.367000   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:04.367016   48840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:43:04.638676   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:43:04.638713   48840 main.go:141] libmachine: Checking connection to Docker...
	I0926 23:43:04.638725   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetURL
	I0926 23:43:04.640227   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | using libvirt version 8000000
	I0926 23:43:04.642959   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.643395   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.643431   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.643659   48840 main.go:141] libmachine: Docker is up and running!
	I0926 23:43:04.643678   48840 main.go:141] libmachine: Reticulating splines...
	I0926 23:43:04.643686   48840 client.go:171] duration metric: took 20.326455404s to LocalClient.Create
	I0926 23:43:04.643709   48840 start.go:167] duration metric: took 20.326557837s to libmachine.API.Create "force-systemd-env-429303"
	I0926 23:43:04.643719   48840 start.go:293] postStartSetup for "force-systemd-env-429303" (driver="kvm2")
	I0926 23:43:04.643728   48840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:43:04.643744   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.644062   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:43:04.644090   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.646895   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.647324   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.647358   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.647544   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.647768   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.647970   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.648124   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.737457   48840 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:43:04.742895   48840 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:43:04.742926   48840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:43:04.743005   48840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:43:04.743114   48840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:43:04.743126   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> /etc/ssl/certs/99142.pem
	I0926 23:43:04.743230   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:43:04.756887   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:43:04.788999   48840 start.go:296] duration metric: took 145.266169ms for postStartSetup
	I0926 23:43:04.789057   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetConfigRaw
	I0926 23:43:04.789714   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.792736   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.793181   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.793231   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.793659   48840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/config.json ...
	I0926 23:43:04.793946   48840 start.go:128] duration metric: took 20.50096342s to createHost
	I0926 23:43:04.793977   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.796522   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.796952   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.796995   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.797199   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.797397   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.797593   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.797728   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.797880   48840 main.go:141] libmachine: Using SSH client type: native
	I0926 23:43:04.798150   48840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0926 23:43:04.798163   48840 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:43:04.909961   48840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758930184.871753731
	
	I0926 23:43:04.909987   48840 fix.go:216] guest clock: 1758930184.871753731
	I0926 23:43:04.909994   48840 fix.go:229] Guest: 2025-09-26 23:43:04.871753731 +0000 UTC Remote: 2025-09-26 23:43:04.793961367 +0000 UTC m=+31.892863571 (delta=77.792364ms)
	I0926 23:43:04.910035   48840 fix.go:200] guest clock delta is within tolerance: 77.792364ms
	I0926 23:43:04.910046   48840 start.go:83] releasing machines lock for "force-systemd-env-429303", held for 20.61733579s
	I0926 23:43:04.910077   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.910394   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:04.914147   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.914605   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.914643   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.914821   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915341   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915553   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .DriverName
	I0926 23:43:04.915651   48840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:43:04.915699   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.915736   48840 ssh_runner.go:195] Run: cat /version.json
	I0926 23:43:04.915755   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHHostname
	I0926 23:43:04.919592   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.919859   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920052   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.920076   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920312   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.920497   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:04.920527   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:04.920533   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.920708   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHPort
	I0926 23:43:04.920787   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.920939   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHKeyPath
	I0926 23:43:04.921116   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:04.921126   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetSSHUsername
	I0926 23:43:04.921295   48840 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/force-systemd-env-429303/id_rsa Username:docker}
	I0926 23:43:05.028442   48840 ssh_runner.go:195] Run: systemctl --version
	I0926 23:43:05.035789   48840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:43:05.196186   48840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:43:05.203966   48840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:43:05.204059   48840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:43:05.226700   48840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 23:43:05.226724   48840 start.go:495] detecting cgroup driver to use...
	I0926 23:43:05.226741   48840 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0926 23:43:05.226817   48840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:43:05.249131   48840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:43:05.268336   48840 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:43:05.268393   48840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:43:05.288488   48840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:43:05.308299   48840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:43:05.482350   48840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:43:05.698507   48840 docker.go:234] disabling docker service ...
	I0926 23:43:05.698607   48840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:43:05.718099   48840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:43:05.734927   48840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:43:05.919711   48840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:43:06.075152   48840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:43:06.096420   48840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:43:06.122184   48840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:43:06.122254   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.136979   48840 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 23:43:06.137057   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.151790   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.165782   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.179764   48840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:43:06.194516   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.208942   48840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.233662   48840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:43:06.248701   48840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:43:06.260787   48840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 23:43:06.260860   48840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 23:43:06.293269   48840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:43:06.307707   48840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:06.484359   48840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:43:06.614334   48840 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:43:06.614436   48840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:43:06.620669   48840 start.go:563] Will wait 60s for crictl version
	I0926 23:43:06.620729   48840 ssh_runner.go:195] Run: which crictl
	I0926 23:43:06.625742   48840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:43:06.674208   48840 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 23:43:06.674304   48840 ssh_runner.go:195] Run: crio --version
	I0926 23:43:06.717417   48840 ssh_runner.go:195] Run: crio --version
	I0926 23:43:06.753543   48840 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
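	The log above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH with sed, pinning the pause image to registry.k8s.io/pause:3.10.1 and forcing the "systemd" cgroup driver. As an illustrative sketch only (not minikube's actual configuration helper; the path and values are simply the ones visible in this log), the same in-place edits could be expressed in Go as:

	package main

	// Sketch of the two sed edits shown in the log above, applied directly to
	// a local copy of the CRI-O drop-in config. Path and values are taken from
	// the log; this is not how minikube itself performs the change.

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path as seen in the log

		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))

		if err := os.WriteFile(conf, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}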
	I0926 23:43:04.936120   50469 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-217447" ...
	I0926 23:43:04.936162   50469 main.go:141] libmachine: (stopped-upgrade-217447) Calling .Start
	I0926 23:43:04.936345   50469 main.go:141] libmachine: (stopped-upgrade-217447) starting domain...
	I0926 23:43:04.936369   50469 main.go:141] libmachine: (stopped-upgrade-217447) ensuring networks are active...
	I0926 23:43:04.937244   50469 main.go:141] libmachine: (stopped-upgrade-217447) Ensuring network default is active
	I0926 23:43:04.937717   50469 main.go:141] libmachine: (stopped-upgrade-217447) Ensuring network mk-stopped-upgrade-217447 is active
	I0926 23:43:04.938223   50469 main.go:141] libmachine: (stopped-upgrade-217447) getting domain XML...
	I0926 23:43:04.939419   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | starting domain XML:
	I0926 23:43:04.939441   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | <domain type='kvm'>
	I0926 23:43:04.939463   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <name>stopped-upgrade-217447</name>
	I0926 23:43:04.939478   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <uuid>00d11e93-9dcc-4733-9dcb-a852ca715ee7</uuid>
	I0926 23:43:04.939499   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <memory unit='KiB'>3145728</memory>
	I0926 23:43:04.939508   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0926 23:43:04.939516   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 23:43:04.939549   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <os>
	I0926 23:43:04.939592   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 23:43:04.939619   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <boot dev='cdrom'/>
	I0926 23:43:04.939641   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <boot dev='hd'/>
	I0926 23:43:04.939656   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <bootmenu enable='no'/>
	I0926 23:43:04.939665   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </os>
	I0926 23:43:04.939679   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <features>
	I0926 23:43:04.939691   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <acpi/>
	I0926 23:43:04.939699   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <apic/>
	I0926 23:43:04.939710   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <pae/>
	I0926 23:43:04.939717   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </features>
	I0926 23:43:04.939731   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 23:43:04.939741   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <clock offset='utc'/>
	I0926 23:43:04.939751   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 23:43:04.939769   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_reboot>restart</on_reboot>
	I0926 23:43:04.939790   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <on_crash>destroy</on_crash>
	I0926 23:43:04.939803   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   <devices>
	I0926 23:43:04.939844   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 23:43:04.939857   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <disk type='file' device='cdrom'>
	I0926 23:43:04.939877   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <driver name='qemu' type='raw'/>
	I0926 23:43:04.939896   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/boot2docker.iso'/>
	I0926 23:43:04.939904   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 23:43:04.939911   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <readonly/>
	I0926 23:43:04.939928   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 23:43:04.939937   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </disk>
	I0926 23:43:04.939945   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <disk type='file' device='disk'>
	I0926 23:43:04.939957   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 23:43:04.939973   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/stopped-upgrade-217447.rawdisk'/>
	I0926 23:43:04.939984   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target dev='hda' bus='virtio'/>
	I0926 23:43:04.939999   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 23:43:04.940009   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </disk>
	I0926 23:43:04.940053   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 23:43:04.940084   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 23:43:04.940098   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </controller>
	I0926 23:43:04.940111   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 23:43:04.940124   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 23:43:04.940134   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 23:43:04.940143   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </controller>
	I0926 23:43:04.940156   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <interface type='network'>
	I0926 23:43:04.940167   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <mac address='52:54:00:b4:98:22'/>
	I0926 23:43:04.940178   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source network='mk-stopped-upgrade-217447'/>
	I0926 23:43:04.940189   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <model type='virtio'/>
	I0926 23:43:04.940202   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 23:43:04.940217   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </interface>
	I0926 23:43:04.940227   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <interface type='network'>
	I0926 23:43:04.940242   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <mac address='52:54:00:a3:44:26'/>
	I0926 23:43:04.940254   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <source network='default'/>
	I0926 23:43:04.940263   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <model type='virtio'/>
	I0926 23:43:04.940278   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 23:43:04.940291   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </interface>
	I0926 23:43:04.940301   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <serial type='pty'>
	I0926 23:43:04.940320   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target type='isa-serial' port='0'>
	I0926 23:43:04.940330   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |         <model name='isa-serial'/>
	I0926 23:43:04.940339   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       </target>
	I0926 23:43:04.940347   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </serial>
	I0926 23:43:04.940358   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <console type='pty'>
	I0926 23:43:04.940370   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <target type='serial' port='0'/>
	I0926 23:43:04.940380   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </console>
	I0926 23:43:04.940390   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <input type='mouse' bus='ps2'/>
	I0926 23:43:04.940397   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 23:43:04.940419   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <audio id='1' type='none'/>
	I0926 23:43:04.940439   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <memballoon model='virtio'>
	I0926 23:43:04.940455   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 23:43:04.940465   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </memballoon>
	I0926 23:43:04.940488   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     <rng model='virtio'>
	I0926 23:43:04.940505   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <backend model='random'>/dev/random</backend>
	I0926 23:43:04.940517   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 23:43:04.940526   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |     </rng>
	I0926 23:43:04.940535   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG |   </devices>
	I0926 23:43:04.940545   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | </domain>
	I0926 23:43:04.940556   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | 
	I0926 23:43:06.406079   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for domain to start...
	I0926 23:43:06.407760   50469 main.go:141] libmachine: (stopped-upgrade-217447) domain is now running
	I0926 23:43:06.407786   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for IP...
	I0926 23:43:06.408879   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.409440   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has current primary IP address 192.168.61.82 and MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.409475   50469 main.go:141] libmachine: (stopped-upgrade-217447) found domain IP: 192.168.61.82
	I0926 23:43:06.409508   50469 main.go:141] libmachine: (stopped-upgrade-217447) reserving static IP address...
	I0926 23:43:06.409957   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | found host DHCP lease matching {name: "stopped-upgrade-217447", mac: "52:54:00:b4:98:22", ip: "192.168.61.82"} in network mk-stopped-upgrade-217447: {Iface:virbr4 ExpiryTime:2025-09-27 00:42:17 +0000 UTC Type:0 Mac:52:54:00:b4:98:22 Iaid: IPaddr:192.168.61.82 Prefix:24 Hostname:stopped-upgrade-217447 Clientid:01:52:54:00:b4:98:22}
	I0926 23:43:06.409990   50469 main.go:141] libmachine: (stopped-upgrade-217447) reserved static IP address 192.168.61.82 for domain stopped-upgrade-217447
	I0926 23:43:06.410013   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | skip adding static IP to network mk-stopped-upgrade-217447 - found existing host DHCP lease matching {name: "stopped-upgrade-217447", mac: "52:54:00:b4:98:22", ip: "192.168.61.82"}
	I0926 23:43:06.410033   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Getting to WaitForSSH function...
	I0926 23:43:06.410074   50469 main.go:141] libmachine: (stopped-upgrade-217447) waiting for SSH...
	I0926 23:43:06.412583   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.413006   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:98:22", ip: ""} in network mk-stopped-upgrade-217447: {Iface:virbr4 ExpiryTime:2025-09-27 00:42:17 +0000 UTC Type:0 Mac:52:54:00:b4:98:22 Iaid: IPaddr:192.168.61.82 Prefix:24 Hostname:stopped-upgrade-217447 Clientid:01:52:54:00:b4:98:22}
	I0926 23:43:06.413046   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | domain stopped-upgrade-217447 has defined IP address 192.168.61.82 and MAC address 52:54:00:b4:98:22 in network mk-stopped-upgrade-217447
	I0926 23:43:06.413227   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Using SSH client type: external
	I0926 23:43:06.413262   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/id_rsa (-rw-------)
	I0926 23:43:06.413297   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/stopped-upgrade-217447/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:43:06.413310   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | About to run SSH command:
	I0926 23:43:06.413327   50469 main.go:141] libmachine: (stopped-upgrade-217447) DBG | exit 0
	I0926 23:43:06.754941   48840 main.go:141] libmachine: (force-systemd-env-429303) Calling .GetIP
	I0926 23:43:06.758802   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:06.759298   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:63:d4", ip: ""} in network mk-force-systemd-env-429303: {Iface:virbr1 ExpiryTime:2025-09-27 00:43:01 +0000 UTC Type:0 Mac:52:54:00:da:63:d4 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:force-systemd-env-429303 Clientid:01:52:54:00:da:63:d4}
	I0926 23:43:06.759329   48840 main.go:141] libmachine: (force-systemd-env-429303) DBG | domain force-systemd-env-429303 has defined IP address 192.168.39.231 and MAC address 52:54:00:da:63:d4 in network mk-force-systemd-env-429303
	I0926 23:43:06.759731   48840 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0926 23:43:06.765271   48840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:43:06.786414   48840 kubeadm.go:883] updating cluster {Name:force-systemd-env-429303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName
:force-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:43:06.786558   48840 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:43:06.786637   48840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:43:06.838127   48840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 23:43:06.838217   48840 ssh_runner.go:195] Run: which lz4
	I0926 23:43:06.844709   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0926 23:43:06.844815   48840 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:43:06.850748   48840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 23:43:06.850790   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	W0926 23:43:08.074177   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	W0926 23:43:10.075004   48726 pod_ready.go:104] pod "kube-apiserver-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:10.573229   48726 pod_ready.go:94] pod "kube-apiserver-pause-298014" is "Ready"
	I0926 23:43:10.573262   48726 pod_ready.go:86] duration metric: took 9.007726914s for pod "kube-apiserver-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:10.576521   48726 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:08.726992   48840 crio.go:462] duration metric: took 1.882207333s to copy over tarball
	I0926 23:43:08.727099   48840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:43:10.477914   48840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.750773882s)
	I0926 23:43:10.477970   48840 crio.go:469] duration metric: took 1.750916175s to extract the tarball
	I0926 23:43:10.477981   48840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 23:43:10.525930   48840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:43:10.575011   48840 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:43:10.575032   48840 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:43:10.575040   48840 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.34.0 crio true true} ...
	I0926 23:43:10.575145   48840 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-429303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:force-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 23:43:10.575221   48840 ssh_runner.go:195] Run: crio config
	I0926 23:43:10.632023   48840 cni.go:84] Creating CNI manager for ""
	I0926 23:43:10.632049   48840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 23:43:10.632069   48840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:43:10.632097   48840 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-429303 NodeName:force-systemd-env-429303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:43:10.632307   48840 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-429303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.231"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:43:10.632381   48840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:43:10.648055   48840 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:43:10.648141   48840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:43:10.663715   48840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0926 23:43:10.690972   48840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:43:10.717106   48840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
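	The kubeadm configuration dumped a few lines above is rendered with this node's IP, API server port, and name, then copied to the VM as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming kubeadm's v1beta4 field names and reusing only the values already shown in this log, of rendering such an InitConfiguration fragment with text/template (this is a generic illustration, not minikube's bootstrapper code):

	package main

	import (
		"os"
		"text/template"
	)

	// Template fragment mirroring the InitConfiguration printed in the log above.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		// Values taken from this log; any other cluster would substitute its own.
		params := struct {
			NodeIP        string
			APIServerPort int
			NodeName      string
		}{"192.168.39.231", 8443, "force-systemd-env-429303"}
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}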
	I0926 23:43:10.752087   48840 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I0926 23:43:10.756882   48840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:43:10.775153   48840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:43:10.920945   48840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:43:10.942055   48840 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303 for IP: 192.168.39.231
	I0926 23:43:10.942083   48840 certs.go:195] generating shared ca certs ...
	I0926 23:43:10.942103   48840 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:10.942287   48840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:43:10.942357   48840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:43:10.942373   48840 certs.go:257] generating profile certs ...
	I0926 23:43:10.942470   48840 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key
	I0926 23:43:10.942493   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt with IP's: []
	I0926 23:43:11.281065   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt ...
	I0926 23:43:11.281095   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.crt: {Name:mkde6d31cac26c55d88ad9c54eb2eb8be9c111cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.281260   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key ...
	I0926 23:43:11.281273   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/client.key: {Name:mkf12a393dbb914d629bca27601d32e142c49271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.281363   48840 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d
	I0926 23:43:11.281380   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231]
	I0926 23:43:11.603425   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d ...
	I0926 23:43:11.603456   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d: {Name:mk67fd2898e3dcb39466e4e0060b8bc203034709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.603662   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d ...
	I0926 23:43:11.603683   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d: {Name:mkbeb17ba68d0b7c83f919216d84bdcf58042d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:11.603819   48840 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt.bda3842d -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt
	I0926 23:43:11.603951   48840 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key.bda3842d -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key
	I0926 23:43:11.604038   48840 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key
	I0926 23:43:11.604060   48840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt with IP's: []
	I0926 23:43:12.051358   48840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt ...
	I0926 23:43:12.051390   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt: {Name:mk85de6de2701a822e468e4b010d87cc631396d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:43:12.051604   48840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key ...
	I0926 23:43:12.051633   48840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key: {Name:mk14b40c7a773659ebc4c7a4f66c7a4056eaaf9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
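	The profile certificates generated above (client, apiserver, proxy-client) are x509 certificates signed against the shared minikubeCA. The following is a generic crypto/x509 sketch of that signing step, not minikube's crypto.go: the SANs mirror the apiserver cert IPs listed in the log, and the CA here is created fresh for the example rather than loaded from the existing .minikube/ca.key as the log indicates minikube does.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Example CA (minikube reuses its existing minikubeCA key pair instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate with the apiserver SANs shown in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.231"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

		// Emit the signed leaf cert in PEM form, the format of the *.crt files above.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}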
	I0926 23:43:12.051750   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0926 23:43:12.051774   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0926 23:43:12.051792   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0926 23:43:12.051810   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0926 23:43:12.051843   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0926 23:43:12.051869   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0926 23:43:12.051891   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0926 23:43:12.051905   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0926 23:43:12.051987   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:43:12.052026   48840 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:43:12.052033   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:43:12.052056   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:43:12.052079   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:43:12.052103   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:43:12.052139   48840 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:43:12.052164   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.052178   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.052190   48840 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem -> /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.052730   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:43:12.089540   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:43:12.125737   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:43:12.161325   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:43:12.196071   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0926 23:43:12.231304   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 23:43:12.264027   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:43:12.301088   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/force-systemd-env-429303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:43:12.333473   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:43:12.367223   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:43:12.398596   48840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:43:12.430079   48840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:43:12.453473   48840 ssh_runner.go:195] Run: openssl version
	I0926 23:43:12.461289   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:43:12.483331   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.491960   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.492039   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:43:12.500509   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:43:12.516773   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:43:12.532142   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.538355   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.538417   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:43:12.547264   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:43:12.566940   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:43:12.585141   48840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.592905   48840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.592979   48840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:43:12.602189   48840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
	I0926 23:43:12.619074   48840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:43:12.625727   48840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:43:12.625790   48840 kubeadm.go:400] StartCluster: {Name:force-systemd-env-429303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:fo
rce-systemd-env-429303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:43:12.625921   48840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:43:12.626022   48840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:43:12.674986   48840 cri.go:89] found id: ""
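	Here minikube asks the container runtime (via crictl) whether any kube-system containers already exist before bootstrapping; the empty "found id" result confirms a fresh node. The same query can be issued directly against the CRI socket. A hedged sketch using the cri-api Go client is below; the socket path is assumed to be CRI-O's default and the filter matches the --label flag shown in the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // CRI-O's default socket; adjust if the runtime is configured differently.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)

        // Same label filter crictl applies for --label io.kubernetes.pod.namespace=kube-system.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
            },
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id, c.Metadata.Name, c.State)
        }
    }
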
	I0926 23:43:12.675076   48840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:43:12.694000   48840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:43:12.708108   48840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:43:12.724581   48840 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:43:12.724610   48840 kubeadm.go:157] found existing configuration files:
	
	I0926 23:43:12.724665   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:43:12.737584   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:43:12.737652   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:43:12.752442   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:43:12.768746   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:43:12.768817   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:43:12.783895   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:43:12.796567   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:43:12.796650   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:43:12.816818   48840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:43:12.830524   48840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:43:12.830607   48840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 23:43:12.843697   48840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
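	Before launching kubeadm init, minikube checks whether leftover kubeconfig files under /etc/kubernetes still reference https://control-plane.minikube.internal:8443; on this fresh VM none of the files exist, so every grep exits with status 2 and each file is removed with rm -f before init runs with the listed preflight errors ignored. A rough Go sketch of that check-and-remove loop is shown below; it is an illustration of the pattern in the log, not the kubeadm.go source itself.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero if the endpoint is absent or the file is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                // rm -f is a no-op when the file does not exist.
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
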
	W0926 23:43:12.585866   48726 pod_ready.go:104] pod "kube-controller-manager-pause-298014" is not "Ready", error: <nil>
	I0926 23:43:14.583781   48726 pod_ready.go:94] pod "kube-controller-manager-pause-298014" is "Ready"
	I0926 23:43:14.583814   48726 pod_ready.go:86] duration metric: took 4.007259459s for pod "kube-controller-manager-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.587530   48726 pod_ready.go:83] waiting for pod "kube-proxy-2s884" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.595420   48726 pod_ready.go:94] pod "kube-proxy-2s884" is "Ready"
	I0926 23:43:14.595443   48726 pod_ready.go:86] duration metric: took 7.882168ms for pod "kube-proxy-2s884" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.598843   48726 pod_ready.go:83] waiting for pod "kube-scheduler-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.605372   48726 pod_ready.go:94] pod "kube-scheduler-pause-298014" is "Ready"
	I0926 23:43:14.605408   48726 pod_ready.go:86] duration metric: took 6.538554ms for pod "kube-scheduler-pause-298014" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:43:14.605423   48726 pod_ready.go:40] duration metric: took 14.065639033s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
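	Interleaved with the force-systemd-env run, the parallel pause-298014 process (pid 48726) finishes its post-restart health check: each control-plane pod is polled until its Ready condition turns True, which took about 14 seconds in total here. A minimal client-go sketch of that kind of readiness wait follows; the kubeconfig path and the pod name are placeholder assumptions, not values taken from minikube's code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Hypothetical kubeconfig path; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll a control-plane pod until it is Ready or the deadline passes.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-pause-298014", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }
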
	I0926 23:43:14.664762   48726 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:43:14.666706   48726 out.go:179] * Done! kubectl is now configured to use "pause-298014" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.849661285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930197849638023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=785f5285-076a-4c74-8af6-421a8b19dcad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.850714322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7af1493f-cc15-46b9-9ec3-252291517fe0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.851119209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7af1493f-cc15-46b9-9ec3-252291517fe0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.851654214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7af1493f-cc15-46b9-9ec3-252291517fe0 name=/runtime.v1.RuntimeService/ListContainers
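	The CRI-O excerpt above is the runtime's debug log of routine CRI polling against the crio socket: a client (typically the kubelet's status and stats loops) repeatedly issues Version, ImageFsInfo, and ListContainers RPCs, and the full container list shows the Attempt:2 control-plane containers running alongside exited Attempt:0 and Attempt:1 predecessors from earlier (re)starts of the same cluster. Disk usage for the image store comes from the ImageService side of the same API; a hedged sketch of that call is below, reusing the connection style from the earlier ListContainers example and assuming CRI-O's default socket path.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // ImageFsInfo reports per-filesystem usage for the image store
        // (the overlay-images mountpoint seen in the responses above).
        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
        if err != nil {
            panic(err)
        }
        for _, fs := range resp.ImageFilesystems {
            fmt.Printf("%s: %d bytes, %d inodes\n",
                fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
        }
    }
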
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.903035940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1d9deac-65f9-4a44-ae82-7013c52aa28a name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.903245129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1d9deac-65f9-4a44-ae82-7013c52aa28a name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.904950714Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=976709ff-d057-461f-82c2-6d7a959c518b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.905497410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930197905475070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=976709ff-d057-461f-82c2-6d7a959c518b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.907004578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f90ad8-29c8-43eb-8cbb-84a8e0d735bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.907434016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f90ad8-29c8-43eb-8cbb-84a8e0d735bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.908599731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02f90ad8-29c8-43eb-8cbb-84a8e0d735bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.970578325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec6a8663-303c-43ae-a47c-c0afa0b0c0ec name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.970704991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec6a8663-303c-43ae-a47c-c0afa0b0c0ec name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.974902525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0aece4e3-321e-412f-b8ae-3de999280863 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.975511178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930197975433316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0aece4e3-321e-412f-b8ae-3de999280863 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.976367040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea789a0d-9b1d-4f33-9522-0287a2300f2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.976584856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea789a0d-9b1d-4f33-9522-0287a2300f2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:17 pause-298014 crio[2823]: time="2025-09-26 23:43:17.977574216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea789a0d-9b1d-4f33-9522-0287a2300f2b name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.046565469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eac203d-c25d-4563-ab15-439d7a42d8a5 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.046672360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eac203d-c25d-4563-ab15-439d7a42d8a5 name=/runtime.v1.RuntimeService/Version
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.048519597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05042867-dea0-4f17-ada6-e42639a5b555 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.049744117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758930198049718098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05042867-dea0-4f17-ada6-e42639a5b555 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.050740599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f7c4d62-8cea-45c5-ac93-4ec7738dd3ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.050843914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f7c4d62-8cea-45c5-ac93-4ec7738dd3ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 26 23:43:18 pause-298014 crio[2823]: time="2025-09-26 23:43:18.051104593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758930179539264204,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930175731757529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930175721066659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubern
etes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e,PodSandboxId:28be4bf6eed2d2025aabc479240b8901e2ce0b868efd70bdc9bfb31d2661c04c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930169665723974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32,PodSandboxId:6838664d8ef4cf86d43524b0dee7eb55bb1570912ebe69c93b79ee1323948460,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758930168428271595,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84,PodSandboxId:0360178764e4aa305c53c0f01568555f409eb4344fe4f1b29d7084d36715d9b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930168619834438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539,PodSandboxId:25ba2c7c4ad25a5eca792bf48599310d1d34fd1e723563774f0063b34dfb8893,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5
864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758930168552716683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c,PodSandboxId:328cd1b818aa760801a7f30787c63006341f23430ad6c18a6bc5fad6d47127a2,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758930168477838140,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f67e4decc7a054ae1e94a2b570f4fc,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a
60ecb5,PodSandboxId:c50d1f9da7946adf9433685602ae2de060ecd12aab963fdd8882343b06e29719,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758930168385646951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c237e844ef8ee507d46fbb3a8e46be9,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437,PodSandboxId:3de365eb9d21bee6f177fec83f72efc709917a5a0ff5ef4847f271796564e572,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758930113041881914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-74fdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930aa1d0-38cf-4e8b-8d24-e674f37f457b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16,PodSandboxId:4c31f34cff33baa7ff084ca213c09415014ab78d37782971e247f37439db6534,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758930098492232379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-298014
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67f6cccd1e1255753574888d6b0323d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c,PodSandboxId:0903619f0bfa10f47049d7f4c8e5204a5e506aa1d1b01ca985c8fde7e4350ef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758930098481134133,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-298014,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a45e85e56140d091175e85ada12059,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f7c4d62-8cea-45c5-ac93-4ec7738dd3ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d68eba2872093       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 seconds ago       Running             kube-proxy                2                   6838664d8ef4c       kube-proxy-2s884
	67bc6f73e4cc6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   22 seconds ago       Running             kube-controller-manager   2                   c50d1f9da7946       kube-controller-manager-pause-298014
	9c4b0724f9fc0       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   22 seconds ago       Running             kube-apiserver            2                   328cd1b818aa7       kube-apiserver-pause-298014
	99c30e76a20c2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   28 seconds ago       Running             coredns                   1                   28be4bf6eed2d       coredns-66bc5c9577-74fdn
	75724a7941be2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   29 seconds ago       Running             etcd                      1                   0360178764e4a       etcd-pause-298014
	a55b9eb950242       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   29 seconds ago       Running             kube-scheduler            1                   25ba2c7c4ad25       kube-scheduler-pause-298014
	0eb5d37736bbc       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   29 seconds ago       Exited              kube-apiserver            1                   328cd1b818aa7       kube-apiserver-pause-298014
	44947fdc81a0b       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   29 seconds ago       Exited              kube-proxy                1                   6838664d8ef4c       kube-proxy-2s884
	1567d2e11655d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   29 seconds ago       Exited              kube-controller-manager   1                   c50d1f9da7946       kube-controller-manager-pause-298014
	51dc69520ea56       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   3de365eb9d21b       coredns-66bc5c9577-74fdn
	2b15910803a54       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   About a minute ago   Exited              kube-scheduler            0                   4c31f34cff33b       kube-scheduler-pause-298014
	4291340e3901f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   0903619f0bfa1       etcd-pause-298014
	
	
	==> coredns [51dc69520ea563551ffb542f8acc0a9060967383c75e5f980c2b4882cd666437] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39755 - 40679 "HINFO IN 8554951345633849584.1848918287105691358. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020050927s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [99c30e76a20c21a36997ee2442af351e90271cd2bca90ad6e7e5235fa4b9490e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] 127.0.0.1:60138 - 52575 "HINFO IN 2465005106924278844.7349295519571415587. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012179428s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-298014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-298014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=pause-298014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_41_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-298014
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 23:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:42:58 +0000   Fri, 26 Sep 2025 23:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.242
	  Hostname:    pause-298014
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a57c05bd83b1481b8bb3b7452b744da5
	  System UUID:                a57c05bd-83b1-481b-8bb3-b7452b744da5
	  Boot ID:                    0485fae6-db91-4be6-a593-b2700010b548
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-74fdn                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     87s
	  kube-system                 etcd-pause-298014                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-298014             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-298014    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-2s884                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-298014             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-298014 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-298014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-298014 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                92s                kubelet          Node pause-298014 status is now: NodeReady
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                node-controller  Node pause-298014 event: Registered Node pause-298014 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-298014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-298014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-298014 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-298014 event: Registered Node pause-298014 in Controller
	
	
	==> dmesg <==
	[Sep26 23:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000077] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005185] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.520681] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086956] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.151630] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.165735] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.024894] kauditd_printk_skb: 18 callbacks suppressed
	[Sep26 23:42] kauditd_printk_skb: 219 callbacks suppressed
	[ +25.937198] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.127220] kauditd_printk_skb: 319 callbacks suppressed
	[Sep26 23:43] kauditd_printk_skb: 63 callbacks suppressed
	[  +4.763759] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4291340e3901ff6ccd5af3a77e5ec33802ebc6e8e947a6db005215772968da3c] <==
	{"level":"warn","ts":"2025-09-26T23:41:43.581796Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.215037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-09-26T23:41:43.581915Z","caller":"traceutil/trace.go:172","msg":"trace[1582729659] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:70; }","duration":"153.357746ms","start":"2025-09-26T23:41:43.428546Z","end":"2025-09-26T23:41:43.581904Z","steps":["trace[1582729659] 'range keys from in-memory index tree'  (duration: 143.149961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:41:43.584212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.147518ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1763891015197594084 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-09-26T23:41:43.584482Z","caller":"traceutil/trace.go:172","msg":"trace[1098904871] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"203.088579ms","start":"2025-09-26T23:41:43.381381Z","end":"2025-09-26T23:41:43.584469Z","steps":["trace[1098904871] 'process raft request'  (duration: 57.2432ms)","trace[1098904871] 'compare'  (duration: 143.00479ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T23:41:43.780399Z","caller":"traceutil/trace.go:172","msg":"trace[79517878] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"189.614813ms","start":"2025-09-26T23:41:43.590651Z","end":"2025-09-26T23:41:43.780266Z","steps":["trace[79517878] 'process raft request'  (duration: 128.195699ms)","trace[79517878] 'compare'  (duration: 61.32184ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:42:26.916430Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.693061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:42:26.916510Z","caller":"traceutil/trace.go:172","msg":"trace[1775930061] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:402; }","duration":"120.83423ms","start":"2025-09-26T23:42:26.795661Z","end":"2025-09-26T23:42:26.916496Z","steps":["trace[1775930061] 'range keys from in-memory index tree'  (duration: 120.555635ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:42:38.438240Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T23:42:38.438435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-298014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.242:2380"],"advertise-client-urls":["https://192.168.83.242:2379"]}
	{"level":"error","ts":"2025-09-26T23:42:38.438555Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T23:42:38.452515Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-26T23:42:38.529651Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530034Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.242:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T23:42:38.530175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.242:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-26T23:42:38.529835Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.529873Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"35987a252efe187a","current-leader-member-id":"35987a252efe187a"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530006Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T23:42:38.530437Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-26T23:42:38.530448Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-26T23:42:38.530453Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.530464Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T23:42:38.534200Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.242:2380"}
	{"level":"error","ts":"2025-09-26T23:42:38.534383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.242:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T23:42:38.534429Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.242:2380"}
	{"level":"info","ts":"2025-09-26T23:42:38.534441Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-298014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.242:2380"],"advertise-client-urls":["https://192.168.83.242:2379"]}
	
	
	==> etcd [75724a7941be277447f415fe53c5d8c9819d6e0af08d8f8498e83cdd5e272c84] <==
	{"level":"warn","ts":"2025-09-26T23:42:57.473144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.489802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.499037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.506413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.533562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.544001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.551401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.562336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.569800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.579455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.588364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.596351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.608395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.616403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.627939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.636608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.646355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.654687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.664652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.672489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.683588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.702482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.712552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.722083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:42:57.797527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:43:18 up 2 min,  0 users,  load average: 1.61, 0.63, 0.23
	Linux pause-298014 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0eb5d37736bbcd89b18982790b72506387ebcf54266e25bad635b4756357149c] <==
	E0926 23:42:52.448078       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=411&timeout=6m12s&timeoutSeconds=372&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Secret"
	E0926 23:42:52.448143       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies?allowWatchBookmarks=true&resourceVersion=411&timeout=6m40s&timeoutSeconds=400&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ValidatingAdmissionPolicy"
	E0926 23:42:52.448191       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/networking.k8s.io/v1/ingressclasses?allowWatchBookmarks=true&resourceVersion=411&timeout=7m50s&timeoutSeconds=470&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.IngressClass"
	E0926 23:42:52.448243       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=411&timeout=5m56s&timeoutSeconds=356&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceQuota"
	I0926 23:42:52.448370       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	E0926 23:42:52.448464       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=411&timeout=7m12s&timeoutSeconds=432&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ClusterRole"
	E0926 23:42:52.448501       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E0926 23:42:52.448525       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="configmaps"
	E0926 23:42:52.448548       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="kubernetes-service-cidr-controller"
	E0926 23:42:52.448569       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F0926 23:42:52.448609       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I0926 23:42:52.555634       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I0926 23:42:52.555719       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0926 23:42:52.555789       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 23:42:52.555831       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 23:42:52.557648       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0926 23:42:52.557695       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0926 23:42:52.557706       1 policy_source.go:240] refreshing policies
	E0926 23:42:52.557749       1 plugin.go:185] "Unhandled Error" err="policy source context unexpectedly closed: handler {0x1e0c480 0x1e0c460 0x1e0c440} was not added to shared informer because it has stopped already" logger="UnhandledError"
	I0926 23:42:52.557898       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 23:42:52.558229       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 23:42:52.559429       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0926 23:42:52.559459       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0926 23:42:52.559477       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0926 23:42:52.561251       1 reflector.go:205] "Failed to watch" err="Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=411&timeout=5m48s&timeoutSeconds=348&watch=true\": context canceled" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RoleBinding"
	
	
	==> kube-apiserver [9c4b0724f9fc00bc656ba9261b9265a46b53536eed5e7fdc475aa37bd2a71193] <==
	I0926 23:42:58.650720       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0926 23:42:58.650729       1 cache.go:39] Caches are synced for autoregister controller
	I0926 23:42:58.653557       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0926 23:42:58.654339       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0926 23:42:58.654455       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0926 23:42:58.655055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0926 23:42:58.660494       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0926 23:42:58.660544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0926 23:42:58.661545       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0926 23:42:58.661613       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0926 23:42:58.672619       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0926 23:42:58.675128       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0926 23:42:58.675339       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0926 23:42:58.684195       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0926 23:42:58.684405       1 policy_source.go:240] refreshing policies
	I0926 23:42:58.693856       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0926 23:42:59.270678       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0926 23:42:59.360404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0926 23:43:00.095771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0926 23:43:00.157686       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0926 23:43:00.197546       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 23:43:00.205508       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 23:43:02.005270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 23:43:02.248403       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 23:43:06.807659       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1567d2e11655dc909f2f494668cacd1951e28df8f20614765d182ca48a60ecb5] <==
	I0926 23:42:50.479346       1 serving.go:386] Generated self-signed cert in-memory
	I0926 23:42:51.311160       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 23:42:51.311199       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:42:51.313714       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 23:42:51.313819       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 23:42:51.314428       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 23:42:51.314917       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [67bc6f73e4cc6a7047607766c90a393303e75e9841c7ad5f4a29a09e5b17ac9e] <==
	I0926 23:43:02.025583       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 23:43:02.026872       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 23:43:02.028366       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 23:43:02.032782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 23:43:02.037990       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0926 23:43:02.041678       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0926 23:43:02.041900       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0926 23:43:02.041937       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0926 23:43:02.041941       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0926 23:43:02.041947       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0926 23:43:02.043840       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0926 23:43:02.043908       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0926 23:43:02.044009       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0926 23:43:02.044141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0926 23:43:02.044142       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0926 23:43:02.044242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-298014"
	I0926 23:43:02.044382       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0926 23:43:02.044472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0926 23:43:02.043897       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0926 23:43:02.044676       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 23:43:02.044873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0926 23:43:02.045130       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 23:43:02.049262       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 23:43:02.049673       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 23:43:02.050200       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32] <==
	I0926 23:42:50.224500       1 server_linux.go:53] "Using iptables proxy"
	I0926 23:42:51.081707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	
	
	==> kube-proxy [d68eba28720931f9bea435ef92878338e4044eb5b887c90cddd259d211ac054c] <==
	I0926 23:42:59.758940       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:42:59.859749       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:42:59.859812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.242"]
	E0926 23:42:59.859923       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:42:59.921516       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 23:42:59.921636       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 23:42:59.921658       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:42:59.948938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:42:59.949337       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:42:59.949352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:42:59.954685       1 config.go:200] "Starting service config controller"
	I0926 23:42:59.954697       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:42:59.954714       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:42:59.954717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:42:59.954726       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:42:59.954729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:42:59.956117       1 config.go:309] "Starting node config controller"
	I0926 23:42:59.958081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:42:59.958132       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:43:00.055843       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:43:00.055957       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 23:43:00.056251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2b15910803a545e4a869d6f43ce6b53ef32a6a2034fa3c82a31296891c2caa16] <==
	E0926 23:41:43.028680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:41:43.028866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:41:43.029228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:41:43.029420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:41:43.927488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:41:43.956779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 23:41:44.008183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 23:41:44.071914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:41:44.132557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:41:44.196987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 23:41:44.231009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:41:44.257800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 23:41:44.265561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 23:41:44.268166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 23:41:44.341165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 23:41:44.370389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 23:41:44.395930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:41:44.406676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 23:41:45.915409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:42:38.446202       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 23:42:38.446271       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 23:42:38.451526       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0926 23:42:38.451618       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:42:38.451879       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 23:42:38.451946       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a55b9eb9502426de9a5e026b5fc80072b333fd93b93de92b2eeefd78e8612539] <==
	E0926 23:42:54.754250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.242:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 23:42:54.953080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.83.242:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 23:42:55.081464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.242:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:42:55.164576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.83.242:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:42:55.165855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.83.242:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.242:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:42:58.521383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 23:42:58.523831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 23:42:58.524041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 23:42:58.524113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 23:42:58.524273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 23:42:58.524399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 23:42:58.524459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 23:42:58.524518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 23:42:58.524574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 23:42:58.524643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 23:42:58.524692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 23:42:58.524748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 23:42:58.524804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 23:42:58.524862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 23:42:58.524922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 23:42:58.524992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 23:42:58.525052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 23:42:58.525115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 23:42:58.525178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0926 23:43:03.768577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.442731    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.442798    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:57 pause-298014 kubelet[3787]: E0926 23:42:57.444480    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.447126    3787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-298014\" not found" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.537835    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.765564    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-298014\" already exists" pod="kube-system/kube-scheduler-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.765627    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.778955    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-298014\" already exists" pod="kube-system/etcd-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.779017    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.795848    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-298014\" already exists" pod="kube-system/kube-apiserver-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.795971    3787 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802069    3787 kubelet_node_status.go:124] "Node was previously registered" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802154    3787 kubelet_node_status.go:78] "Successfully registered node" node="pause-298014"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.802180    3787 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: I0926 23:42:58.803703    3787 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 26 23:42:58 pause-298014 kubelet[3787]: E0926 23:42:58.809858    3787 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-298014\" already exists" pod="kube-system/kube-controller-manager-pause-298014"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.218915    3787 apiserver.go:52] "Watching apiserver"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.240783    3787 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.264805    3787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43-xtables-lock\") pod \"kube-proxy-2s884\" (UID: \"eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43\") " pod="kube-system/kube-proxy-2s884"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.266220    3787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43-lib-modules\") pod \"kube-proxy-2s884\" (UID: \"eecd3ea5-b61d-47e0-8c88-4ff19ebe1b43\") " pod="kube-system/kube-proxy-2s884"
	Sep 26 23:42:59 pause-298014 kubelet[3787]: I0926 23:42:59.523744    3787 scope.go:117] "RemoveContainer" containerID="44947fdc81a0b7b0f80270fbf051c9cb1b239434ca02be9c632ac93d614f6b32"
	Sep 26 23:43:05 pause-298014 kubelet[3787]: E0926 23:43:05.398522    3787 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758930185397659925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:05 pause-298014 kubelet[3787]: E0926 23:43:05.398579    3787 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758930185397659925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:15 pause-298014 kubelet[3787]: E0926 23:43:15.402169    3787 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758930195400774313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 26 23:43:15 pause-298014 kubelet[3787]: E0926 23:43:15.402211    3787 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758930195400774313  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-298014 -n pause-298014
helpers_test.go:269: (dbg) Run:  kubectl --context pause-298014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.35s)
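The scheduler "Failed to watch ... forbidden" and "connection refused" lines in the post-mortem above are the usual churn while the apiserver restarts; both scheduler containers report "Caches are synced" once it is back (23:41:45 and 23:43:03). Two messages are worth confirming by hand if they recur: the kube-proxy warning about nodePortAddresses and the missing IPv6 nat table, and the kubelet eviction-manager "missing image stats" errors. A minimal sketch of those manual checks, assuming the pause-298014 profile from this run is still up; these commands are illustrative and not part of the recorded run:

    # IPv6 NAT support: expected to fail with "Table does not exist" if the guest
    # kernel lacks ip6table_nat, which is what kube-proxy reported before falling
    # back to single-stack IPv4.
    out/minikube-linux-amd64 -p pause-298014 ssh -- sudo ip6tables -t nat -L -n

    # Image filesystem stats as reported by CRI-O; the kubelet eviction manager
    # above treats these stats as missing even though an image_filesystems entry
    # is returned.
    out/minikube-linux-amd64 -p pause-298014 ssh -- sudo crictl imagefsinfo

    # Confirm the scheduler's RBAC "forbidden" errors were only transient.
    kubectl --context pause-298014 auth can-i list nodes --as=system:kube-scheduler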

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9wwwt" [765bffdb-42c1-4742-b6f6-448a5ca12c32] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-27 00:01:56.958483816 +0000 UTC m=+5582.788883289
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-994238 describe po kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-994238 describe po kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-9wwwt
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-994238/192.168.72.66
Start Time:       Fri, 26 Sep 2025 23:52:44 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w464s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-w464s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m12s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt to embed-certs-994238
Warning  Failed     8m42s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     8m                      kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m54s (x5 over 9m12s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m24s (x5 over 8m42s)   kubelet            Error: ErrImagePull
Warning  Failed     3m24s (x3 over 7m5s)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m21s (x16 over 8m41s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    77s (x21 over 8m41s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
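Every pull attempt in the events above fails with Docker Hub's unauthenticated rate limit (toomanyrequests), so the dashboard pod sits in ImagePullBackOff for the entire 9m0s wait. One way a run like this could be unblocked by hand is to pre-load the digest-pinned image into the profile from a machine that can still pull it (or to authenticate the pull with an imagePullSecret). A sketch only, assuming the host's Docker daemon can fetch or already caches the image, and reusing the profile name from this test; none of this was part of the recorded run:

    # Hypothetical workaround: pull on the host, then push into the node's store.
    docker pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    out/minikube-linux-amd64 -p embed-certs-994238 image load docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93

With the image already in the node's CRI-O store, the pull can be satisfied locally instead of hitting the registry, provided the pod's imagePullPolicy is not Always.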
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard: exit status 1 (80.252014ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-9wwwt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
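Because the container never started, kubectl logs can only return the BadRequest shown above; namespace events are the more useful follow-up in that state. A short sketch of the manual checks one might run next, assuming the embed-certs-994238 context from this run is still reachable:

    kubectl --context embed-certs-994238 -n kubernetes-dashboard get pods -o wide
    kubectl --context embed-certs-994238 -n kubernetes-dashboard get events --sort-by=.metadata.creationTimestamp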
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994238 -n embed-certs-994238
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-994238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-994238 logs -n 25: (1.411423737s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-421834 sudo iptables -t nat -L -n -v                                 │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status kubelet --all --full --no-pager         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl cat kubelet --no-pager                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status docker --all --full --no-pager          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat docker --no-pager                          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/docker/daemon.json                              │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo docker system info                                       │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat cri-docker --no-pager                      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cri-dockerd --version                                    │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status containerd --all --full --no-pager      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat containerd --no-pager                      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /lib/systemd/system/containerd.service               │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/containerd/config.toml                          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo containerd config dump                                   │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status crio --all --full --no-pager            │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl cat crio --no-pager                            │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo crio config                                              │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ delete  │ -p bridge-421834                                                               │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:53:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:53:03.230222   66389 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:53:03.230606   66389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:53:03.230625   66389 out.go:374] Setting ErrFile to fd 2...
	I0926 23:53:03.230632   66389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:53:03.231015   66389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:53:03.231745   66389 out.go:368] Setting JSON to false
	I0926 23:53:03.233328   66389 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5728,"bootTime":1758925055,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:53:03.233417   66389 start.go:140] virtualization: kvm guest
	I0926 23:53:03.235488   66389 out.go:179] * [bridge-421834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:53:03.236968   66389 notify.go:220] Checking for updates...
	I0926 23:53:03.236990   66389 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:53:03.238477   66389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:53:03.239701   66389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:03.241110   66389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.242715   66389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:53:03.244044   66389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:53:03.246323   66389 config.go:182] Loaded profile config "embed-certs-994238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246463   66389 config.go:182] Loaded profile config "enable-default-cni-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246577   66389 config.go:182] Loaded profile config "flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246697   66389 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:53:03.285672   66389 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 23:53:03.286916   66389 start.go:304] selected driver: kvm2
	I0926 23:53:03.286939   66389 start.go:924] validating driver "kvm2" against <nil>
	I0926 23:53:03.286956   66389 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:53:03.288092   66389 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:53:03.288200   66389 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:53:03.304645   66389 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:53:03.304690   66389 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:53:03.321296   66389 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:53:03.321352   66389 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:53:03.321741   66389 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:03.321798   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:03.321812   66389 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:53:03.321891   66389 start.go:348] cluster config:
	{Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0926 23:53:03.322027   66389 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:53:03.323954   66389 out.go:179] * Starting "bridge-421834" primary control-plane node in "bridge-421834" cluster
	I0926 23:53:03.325338   66389 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:53:03.325392   66389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:53:03.325406   66389 cache.go:58] Caching tarball of preloaded images
	I0926 23:53:03.325548   66389 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:53:03.325566   66389 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:53:03.325711   66389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json ...
	I0926 23:53:03.325746   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json: {Name:mkc3cbb36558969d3f714e3524b9d6df6545a49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:03.326026   66389 start.go:360] acquireMachinesLock for bridge-421834: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:53:03.326086   66389 start.go:364] duration metric: took 37.574µs to acquireMachinesLock for "bridge-421834"
	I0926 23:53:03.326111   66389 start.go:93] Provisioning new machine with config: &{Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Clu
sterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:03.326176   66389 start.go:125] createHost starting for "" (driver="kvm2")
	W0926 23:53:04.650091   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	W0926 23:53:07.151106   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	I0926 23:53:03.327735   66389 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 23:53:03.327974   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:03.328034   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:03.342698   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0926 23:53:03.343259   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:03.343858   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:03.343896   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:03.344286   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:03.344651   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:03.344846   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:03.345076   66389 start.go:159] libmachine.API.Create for "bridge-421834" (driver="kvm2")
	I0926 23:53:03.345129   66389 client.go:168] LocalClient.Create starting
	I0926 23:53:03.345171   66389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem
	I0926 23:53:03.345218   66389 main.go:141] libmachine: Decoding PEM data...
	I0926 23:53:03.345238   66389 main.go:141] libmachine: Parsing certificate...
	I0926 23:53:03.345312   66389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem
	I0926 23:53:03.345344   66389 main.go:141] libmachine: Decoding PEM data...
	I0926 23:53:03.345373   66389 main.go:141] libmachine: Parsing certificate...
	I0926 23:53:03.345402   66389 main.go:141] libmachine: Running pre-create checks...
	I0926 23:53:03.345413   66389 main.go:141] libmachine: (bridge-421834) Calling .PreCreateCheck
	I0926 23:53:03.345724   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:03.346239   66389 main.go:141] libmachine: Creating machine...
	I0926 23:53:03.346261   66389 main.go:141] libmachine: (bridge-421834) Calling .Create
	I0926 23:53:03.346383   66389 main.go:141] libmachine: (bridge-421834) creating domain...
	I0926 23:53:03.346403   66389 main.go:141] libmachine: (bridge-421834) creating network...
	I0926 23:53:03.348128   66389 main.go:141] libmachine: (bridge-421834) DBG | found existing default network
	I0926 23:53:03.348334   66389 main.go:141] libmachine: (bridge-421834) DBG | <network connections='3'>
	I0926 23:53:03.348355   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>default</name>
	I0926 23:53:03.348368   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0926 23:53:03.348383   66389 main.go:141] libmachine: (bridge-421834) DBG |   <forward mode='nat'>
	I0926 23:53:03.348395   66389 main.go:141] libmachine: (bridge-421834) DBG |     <nat>
	I0926 23:53:03.348408   66389 main.go:141] libmachine: (bridge-421834) DBG |       <port start='1024' end='65535'/>
	I0926 23:53:03.348421   66389 main.go:141] libmachine: (bridge-421834) DBG |     </nat>
	I0926 23:53:03.348436   66389 main.go:141] libmachine: (bridge-421834) DBG |   </forward>
	I0926 23:53:03.348452   66389 main.go:141] libmachine: (bridge-421834) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0926 23:53:03.348466   66389 main.go:141] libmachine: (bridge-421834) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0926 23:53:03.348476   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0926 23:53:03.348482   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.348490   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0926 23:53:03.348500   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.348507   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.348514   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.348522   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.349251   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.349100   66416 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:3f:8a} reservation:<nil>}
	I0926 23:53:03.349892   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.349767   66416 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:92:05} reservation:<nil>}
	I0926 23:53:03.350594   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.350504   66416 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292aa0}
	I0926 23:53:03.350611   66389 main.go:141] libmachine: (bridge-421834) DBG | defining private network:
	I0926 23:53:03.350674   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.350700   66389 main.go:141] libmachine: (bridge-421834) DBG | <network>
	I0926 23:53:03.350711   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>mk-bridge-421834</name>
	I0926 23:53:03.350720   66389 main.go:141] libmachine: (bridge-421834) DBG |   <dns enable='no'/>
	I0926 23:53:03.350734   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0926 23:53:03.350756   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.350770   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0926 23:53:03.350780   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.350788   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.350801   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.350811   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.356908   66389 main.go:141] libmachine: (bridge-421834) DBG | creating private network mk-bridge-421834 192.168.61.0/24...
	I0926 23:53:03.447178   66389 main.go:141] libmachine: (bridge-421834) DBG | private network mk-bridge-421834 192.168.61.0/24 created
	I0926 23:53:03.447463   66389 main.go:141] libmachine: (bridge-421834) DBG | <network>
	I0926 23:53:03.447481   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>mk-bridge-421834</name>
	I0926 23:53:03.447494   66389 main.go:141] libmachine: (bridge-421834) setting up store path in /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 ...
	I0926 23:53:03.447503   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>20995e88-23b6-4a61-b3dc-4476e2fed59a</uuid>
	I0926 23:53:03.447514   66389 main.go:141] libmachine: (bridge-421834) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0926 23:53:03.447521   66389 main.go:141] libmachine: (bridge-421834) DBG |   <mac address='52:54:00:85:bd:a6'/>
	I0926 23:53:03.447530   66389 main.go:141] libmachine: (bridge-421834) DBG |   <dns enable='no'/>
	I0926 23:53:03.447539   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0926 23:53:03.447565   66389 main.go:141] libmachine: (bridge-421834) building disk image from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 23:53:03.447593   66389 main.go:141] libmachine: (bridge-421834) Downloading /home/jenkins/minikube-integration/21642-6020/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0926 23:53:03.447605   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.447624   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0926 23:53:03.447637   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.447648   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.447661   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.447670   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.447694   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.447443   66416 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.710861   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.710725   66416 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa...
	I0926 23:53:03.942057   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.941915   66416 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk...
	I0926 23:53:03.942095   66389 main.go:141] libmachine: (bridge-421834) DBG | Writing magic tar header
	I0926 23:53:03.942116   66389 main.go:141] libmachine: (bridge-421834) DBG | Writing SSH key tar header
	I0926 23:53:03.942190   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.942133   66416 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 ...
	I0926 23:53:03.942272   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834
	I0926 23:53:03.942295   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines
	I0926 23:53:03.942313   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 (perms=drwx------)
	I0926 23:53:03.942327   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.942337   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020
	I0926 23:53:03.942346   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines (perms=drwxr-xr-x)
	I0926 23:53:03.942355   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube (perms=drwxr-xr-x)
	I0926 23:53:03.942363   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020 (perms=drwxrwxr-x)
	I0926 23:53:03.942375   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0926 23:53:03.942399   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0926 23:53:03.942411   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins
	I0926 23:53:03.942420   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0926 23:53:03.942432   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home
	I0926 23:53:03.942446   66389 main.go:141] libmachine: (bridge-421834) DBG | skipping /home - not owner
	I0926 23:53:03.942456   66389 main.go:141] libmachine: (bridge-421834) defining domain...
	I0926 23:53:03.944054   66389 main.go:141] libmachine: (bridge-421834) defining domain using XML: 
	I0926 23:53:03.944122   66389 main.go:141] libmachine: (bridge-421834) <domain type='kvm'>
	I0926 23:53:03.944136   66389 main.go:141] libmachine: (bridge-421834)   <name>bridge-421834</name>
	I0926 23:53:03.944153   66389 main.go:141] libmachine: (bridge-421834)   <memory unit='MiB'>3072</memory>
	I0926 23:53:03.944163   66389 main.go:141] libmachine: (bridge-421834)   <vcpu>2</vcpu>
	I0926 23:53:03.944172   66389 main.go:141] libmachine: (bridge-421834)   <features>
	I0926 23:53:03.944184   66389 main.go:141] libmachine: (bridge-421834)     <acpi/>
	I0926 23:53:03.944191   66389 main.go:141] libmachine: (bridge-421834)     <apic/>
	I0926 23:53:03.944202   66389 main.go:141] libmachine: (bridge-421834)     <pae/>
	I0926 23:53:03.944209   66389 main.go:141] libmachine: (bridge-421834)   </features>
	I0926 23:53:03.944221   66389 main.go:141] libmachine: (bridge-421834)   <cpu mode='host-passthrough'>
	I0926 23:53:03.944230   66389 main.go:141] libmachine: (bridge-421834)   </cpu>
	I0926 23:53:03.944273   66389 main.go:141] libmachine: (bridge-421834)   <os>
	I0926 23:53:03.944306   66389 main.go:141] libmachine: (bridge-421834)     <type>hvm</type>
	I0926 23:53:03.944324   66389 main.go:141] libmachine: (bridge-421834)     <boot dev='cdrom'/>
	I0926 23:53:03.944333   66389 main.go:141] libmachine: (bridge-421834)     <boot dev='hd'/>
	I0926 23:53:03.944343   66389 main.go:141] libmachine: (bridge-421834)     <bootmenu enable='no'/>
	I0926 23:53:03.944351   66389 main.go:141] libmachine: (bridge-421834)   </os>
	I0926 23:53:03.944360   66389 main.go:141] libmachine: (bridge-421834)   <devices>
	I0926 23:53:03.944384   66389 main.go:141] libmachine: (bridge-421834)     <disk type='file' device='cdrom'>
	I0926 23:53:03.944403   66389 main.go:141] libmachine: (bridge-421834)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/boot2docker.iso'/>
	I0926 23:53:03.944416   66389 main.go:141] libmachine: (bridge-421834)       <target dev='hdc' bus='scsi'/>
	I0926 23:53:03.944422   66389 main.go:141] libmachine: (bridge-421834)       <readonly/>
	I0926 23:53:03.944433   66389 main.go:141] libmachine: (bridge-421834)     </disk>
	I0926 23:53:03.944442   66389 main.go:141] libmachine: (bridge-421834)     <disk type='file' device='disk'>
	I0926 23:53:03.944455   66389 main.go:141] libmachine: (bridge-421834)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0926 23:53:03.944470   66389 main.go:141] libmachine: (bridge-421834)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk'/>
	I0926 23:53:03.944478   66389 main.go:141] libmachine: (bridge-421834)       <target dev='hda' bus='virtio'/>
	I0926 23:53:03.944490   66389 main.go:141] libmachine: (bridge-421834)     </disk>
	I0926 23:53:03.944497   66389 main.go:141] libmachine: (bridge-421834)     <interface type='network'>
	I0926 23:53:03.944511   66389 main.go:141] libmachine: (bridge-421834)       <source network='mk-bridge-421834'/>
	I0926 23:53:03.944521   66389 main.go:141] libmachine: (bridge-421834)       <model type='virtio'/>
	I0926 23:53:03.944531   66389 main.go:141] libmachine: (bridge-421834)     </interface>
	I0926 23:53:03.944541   66389 main.go:141] libmachine: (bridge-421834)     <interface type='network'>
	I0926 23:53:03.944555   66389 main.go:141] libmachine: (bridge-421834)       <source network='default'/>
	I0926 23:53:03.944575   66389 main.go:141] libmachine: (bridge-421834)       <model type='virtio'/>
	I0926 23:53:03.944584   66389 main.go:141] libmachine: (bridge-421834)     </interface>
	I0926 23:53:03.944590   66389 main.go:141] libmachine: (bridge-421834)     <serial type='pty'>
	I0926 23:53:03.944598   66389 main.go:141] libmachine: (bridge-421834)       <target port='0'/>
	I0926 23:53:03.944604   66389 main.go:141] libmachine: (bridge-421834)     </serial>
	I0926 23:53:03.944621   66389 main.go:141] libmachine: (bridge-421834)     <console type='pty'>
	I0926 23:53:03.944627   66389 main.go:141] libmachine: (bridge-421834)       <target type='serial' port='0'/>
	I0926 23:53:03.944645   66389 main.go:141] libmachine: (bridge-421834)     </console>
	I0926 23:53:03.944652   66389 main.go:141] libmachine: (bridge-421834)     <rng model='virtio'>
	I0926 23:53:03.944677   66389 main.go:141] libmachine: (bridge-421834)       <backend model='random'>/dev/random</backend>
	I0926 23:53:03.944690   66389 main.go:141] libmachine: (bridge-421834)     </rng>
	I0926 23:53:03.944698   66389 main.go:141] libmachine: (bridge-421834)   </devices>
	I0926 23:53:03.944704   66389 main.go:141] libmachine: (bridge-421834) </domain>
	I0926 23:53:03.944712   66389 main.go:141] libmachine: (bridge-421834) 
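The "defining domain", "ensuring networks are active" and "starting domain" steps that follow map onto three libvirt calls. Here is a hypothetical sketch using the libvirt Go bindings (libvirt.org/go/libvirt, which needs CGo and the libvirt development headers); startDomain is an illustrative helper, and the domainXML argument stands in for the <domain> document printed in the log.

// Sketch of define -> ensure networks active -> start, using the libvirt Go bindings.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func startDomain(domainXML string, networks []string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Define (but do not yet start) the domain from the XML shown in the log.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Ensure every network the domain references is active before booting it.
	for _, name := range networks {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			return err
		}
		active, err := net.IsActive()
		if err == nil && !active {
			err = net.Create() // start the network
		}
		net.Free()
		if err != nil {
			return err
		}
	}

	// Boot the defined domain; this corresponds to "starting domain..." in the log.
	return dom.Create()
}

func main() {
	if err := startDomain("<domain type='kvm'>...</domain>",
		[]string{"default", "mk-bridge-421834"}); err != nil {
		fmt.Println("start failed:", err)
	}
}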
	I0926 23:53:03.950476   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:42:09:77 in network default
	I0926 23:53:03.951187   66389 main.go:141] libmachine: (bridge-421834) starting domain...
	I0926 23:53:03.951211   66389 main.go:141] libmachine: (bridge-421834) ensuring networks are active...
	I0926 23:53:03.951222   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:03.952135   66389 main.go:141] libmachine: (bridge-421834) Ensuring network default is active
	I0926 23:53:03.952553   66389 main.go:141] libmachine: (bridge-421834) Ensuring network mk-bridge-421834 is active
	I0926 23:53:03.953289   66389 main.go:141] libmachine: (bridge-421834) getting domain XML...
	I0926 23:53:03.954482   66389 main.go:141] libmachine: (bridge-421834) DBG | starting domain XML:
	I0926 23:53:03.954505   66389 main.go:141] libmachine: (bridge-421834) DBG | <domain type='kvm'>
	I0926 23:53:03.954516   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>bridge-421834</name>
	I0926 23:53:03.954524   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>8d3b42b7-f84e-4eb2-ada2-e26070399929</uuid>
	I0926 23:53:03.954541   66389 main.go:141] libmachine: (bridge-421834) DBG |   <memory unit='KiB'>3145728</memory>
	I0926 23:53:03.954550   66389 main.go:141] libmachine: (bridge-421834) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0926 23:53:03.954562   66389 main.go:141] libmachine: (bridge-421834) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 23:53:03.954586   66389 main.go:141] libmachine: (bridge-421834) DBG |   <os>
	I0926 23:53:03.954619   66389 main.go:141] libmachine: (bridge-421834) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 23:53:03.954670   66389 main.go:141] libmachine: (bridge-421834) DBG |     <boot dev='cdrom'/>
	I0926 23:53:03.954686   66389 main.go:141] libmachine: (bridge-421834) DBG |     <boot dev='hd'/>
	I0926 23:53:03.954701   66389 main.go:141] libmachine: (bridge-421834) DBG |     <bootmenu enable='no'/>
	I0926 23:53:03.954713   66389 main.go:141] libmachine: (bridge-421834) DBG |   </os>
	I0926 23:53:03.954723   66389 main.go:141] libmachine: (bridge-421834) DBG |   <features>
	I0926 23:53:03.954731   66389 main.go:141] libmachine: (bridge-421834) DBG |     <acpi/>
	I0926 23:53:03.954740   66389 main.go:141] libmachine: (bridge-421834) DBG |     <apic/>
	I0926 23:53:03.954757   66389 main.go:141] libmachine: (bridge-421834) DBG |     <pae/>
	I0926 23:53:03.954773   66389 main.go:141] libmachine: (bridge-421834) DBG |   </features>
	I0926 23:53:03.954783   66389 main.go:141] libmachine: (bridge-421834) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 23:53:03.954794   66389 main.go:141] libmachine: (bridge-421834) DBG |   <clock offset='utc'/>
	I0926 23:53:03.954809   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 23:53:03.954819   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_reboot>restart</on_reboot>
	I0926 23:53:03.954840   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_crash>destroy</on_crash>
	I0926 23:53:03.954848   66389 main.go:141] libmachine: (bridge-421834) DBG |   <devices>
	I0926 23:53:03.954861   66389 main.go:141] libmachine: (bridge-421834) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 23:53:03.954874   66389 main.go:141] libmachine: (bridge-421834) DBG |     <disk type='file' device='cdrom'>
	I0926 23:53:03.954900   66389 main.go:141] libmachine: (bridge-421834) DBG |       <driver name='qemu' type='raw'/>
	I0926 23:53:03.954942   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/boot2docker.iso'/>
	I0926 23:53:03.954955   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 23:53:03.954962   66389 main.go:141] libmachine: (bridge-421834) DBG |       <readonly/>
	I0926 23:53:03.954976   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 23:53:03.954986   66389 main.go:141] libmachine: (bridge-421834) DBG |     </disk>
	I0926 23:53:03.954995   66389 main.go:141] libmachine: (bridge-421834) DBG |     <disk type='file' device='disk'>
	I0926 23:53:03.955007   66389 main.go:141] libmachine: (bridge-421834) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 23:53:03.955027   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk'/>
	I0926 23:53:03.955043   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target dev='hda' bus='virtio'/>
	I0926 23:53:03.955064   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 23:53:03.955092   66389 main.go:141] libmachine: (bridge-421834) DBG |     </disk>
	I0926 23:53:03.955107   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 23:53:03.955120   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 23:53:03.955132   66389 main.go:141] libmachine: (bridge-421834) DBG |     </controller>
	I0926 23:53:03.955144   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 23:53:03.955156   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 23:53:03.955165   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 23:53:03.955176   66389 main.go:141] libmachine: (bridge-421834) DBG |     </controller>
	I0926 23:53:03.955183   66389 main.go:141] libmachine: (bridge-421834) DBG |     <interface type='network'>
	I0926 23:53:03.955196   66389 main.go:141] libmachine: (bridge-421834) DBG |       <mac address='52:54:00:35:cf:e4'/>
	I0926 23:53:03.955211   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source network='mk-bridge-421834'/>
	I0926 23:53:03.955222   66389 main.go:141] libmachine: (bridge-421834) DBG |       <model type='virtio'/>
	I0926 23:53:03.955234   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 23:53:03.955245   66389 main.go:141] libmachine: (bridge-421834) DBG |     </interface>
	I0926 23:53:03.955252   66389 main.go:141] libmachine: (bridge-421834) DBG |     <interface type='network'>
	I0926 23:53:03.955261   66389 main.go:141] libmachine: (bridge-421834) DBG |       <mac address='52:54:00:42:09:77'/>
	I0926 23:53:03.955269   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source network='default'/>
	I0926 23:53:03.955281   66389 main.go:141] libmachine: (bridge-421834) DBG |       <model type='virtio'/>
	I0926 23:53:03.955294   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 23:53:03.955317   66389 main.go:141] libmachine: (bridge-421834) DBG |     </interface>
	I0926 23:53:03.955327   66389 main.go:141] libmachine: (bridge-421834) DBG |     <serial type='pty'>
	I0926 23:53:03.955337   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target type='isa-serial' port='0'>
	I0926 23:53:03.955363   66389 main.go:141] libmachine: (bridge-421834) DBG |         <model name='isa-serial'/>
	I0926 23:53:03.955379   66389 main.go:141] libmachine: (bridge-421834) DBG |       </target>
	I0926 23:53:03.955387   66389 main.go:141] libmachine: (bridge-421834) DBG |     </serial>
	I0926 23:53:03.955402   66389 main.go:141] libmachine: (bridge-421834) DBG |     <console type='pty'>
	I0926 23:53:03.955412   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target type='serial' port='0'/>
	I0926 23:53:03.955421   66389 main.go:141] libmachine: (bridge-421834) DBG |     </console>
	I0926 23:53:03.955430   66389 main.go:141] libmachine: (bridge-421834) DBG |     <input type='mouse' bus='ps2'/>
	I0926 23:53:03.955440   66389 main.go:141] libmachine: (bridge-421834) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 23:53:03.955448   66389 main.go:141] libmachine: (bridge-421834) DBG |     <audio id='1' type='none'/>
	I0926 23:53:03.955459   66389 main.go:141] libmachine: (bridge-421834) DBG |     <memballoon model='virtio'>
	I0926 23:53:03.955482   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 23:53:03.955498   66389 main.go:141] libmachine: (bridge-421834) DBG |     </memballoon>
	I0926 23:53:03.955514   66389 main.go:141] libmachine: (bridge-421834) DBG |     <rng model='virtio'>
	I0926 23:53:03.955525   66389 main.go:141] libmachine: (bridge-421834) DBG |       <backend model='random'>/dev/random</backend>
	I0926 23:53:03.955539   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 23:53:03.955561   66389 main.go:141] libmachine: (bridge-421834) DBG |     </rng>
	I0926 23:53:03.955581   66389 main.go:141] libmachine: (bridge-421834) DBG |   </devices>
	I0926 23:53:03.955599   66389 main.go:141] libmachine: (bridge-421834) DBG | </domain>
	I0926 23:53:03.955609   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:05.413564   66389 main.go:141] libmachine: (bridge-421834) waiting for domain to start...
	I0926 23:53:05.415112   66389 main.go:141] libmachine: (bridge-421834) domain is now running
	I0926 23:53:05.415138   66389 main.go:141] libmachine: (bridge-421834) waiting for IP...
	I0926 23:53:05.416102   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.416795   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.416811   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.417206   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.417274   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.417222   66416 retry.go:31] will retry after 242.746698ms: waiting for domain to come up
	I0926 23:53:05.661796   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.662661   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.662693   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.663056   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.663085   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.663033   66416 retry.go:31] will retry after 310.046377ms: waiting for domain to come up
	I0926 23:53:05.974985   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.975952   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.975977   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.976452   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.976480   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.976421   66416 retry.go:31] will retry after 380.53988ms: waiting for domain to come up
	I0926 23:53:06.359242   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:06.359992   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:06.360015   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:06.360452   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:06.360483   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:06.360439   66416 retry.go:31] will retry after 379.942424ms: waiting for domain to come up
	I0926 23:53:06.742493   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:06.743323   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:06.743354   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:06.743877   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:06.743937   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:06.743882   66416 retry.go:31] will retry after 473.943109ms: waiting for domain to come up
	I0926 23:53:07.219641   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:07.220455   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:07.220483   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:07.220879   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:07.220907   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:07.220806   66416 retry.go:31] will retry after 830.680185ms: waiting for domain to come up
	I0926 23:53:08.053128   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:08.053889   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:08.053917   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:08.054379   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:08.054406   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:08.054318   66416 retry.go:31] will retry after 1.082514621s: waiting for domain to come up
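The "waiting for IP..." retries above query libvirt for the domain's addresses, first from the DHCP lease table and then from the ARP cache, until the guest acquires one. Below is a rough sketch of that loop (not minikube's retry.go), assuming virsh domifaddr with --source lease/arp is available; domainIP, the attempt cap and the fixed 500 ms delay are illustrative simplifications.

// Poll for a domain's IPv4 address via the lease table, falling back to ARP.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

var ipv4 = regexp.MustCompile(`\b(\d{1,3}\.){3}\d{1,3}\b`)

func domainIP(domain string) (string, error) {
	for attempt := 0; attempt < 20; attempt++ {
		for _, source := range []string{"lease", "arp"} {
			out, err := exec.Command("virsh", "-c", "qemu:///system",
				"domifaddr", domain, "--source", source).CombinedOutput()
			if err != nil {
				continue // e.g. the domain is not running yet
			}
			if ip := ipv4.FindString(string(out)); ip != "" && ip != "127.0.0.1" {
				return ip, nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the real code backs off with jitter
	}
	return "", fmt.Errorf("no IP found for domain %s", domain)
}

func main() {
	ip, err := domainIP("bridge-421834")
	fmt.Println(ip, err)
}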
	I0926 23:53:09.910591   64230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:53:09.910672   64230 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:53:09.910770   64230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:53:09.910917   64230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:53:09.911047   64230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:53:09.911161   64230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:53:10.016888   64230 out.go:252]   - Generating certificates and keys ...
	I0926 23:53:10.017026   64230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:53:10.017121   64230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:53:10.017222   64230 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:53:10.017304   64230 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:53:10.017390   64230 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:53:10.017461   64230 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:53:10.017545   64230 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:53:10.017756   64230 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-421834 localhost] and IPs [192.168.50.130 127.0.0.1 ::1]
	I0926 23:53:10.017878   64230 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:53:10.018056   64230 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-421834 localhost] and IPs [192.168.50.130 127.0.0.1 ::1]
	I0926 23:53:10.018164   64230 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:53:10.018257   64230 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:53:10.018321   64230 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:53:10.018413   64230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:53:10.018491   64230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:53:10.018576   64230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:53:10.018685   64230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:53:10.018787   64230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:53:10.018889   64230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:53:10.019010   64230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:53:10.019109   64230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:53:10.082116   64230 out.go:252]   - Booting up control plane ...
	I0926 23:53:10.082252   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:53:10.082387   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:53:10.082533   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:53:10.082754   64230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:53:10.082923   64230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:53:10.083091   64230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:53:10.083214   64230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:53:10.083284   64230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:53:10.083469   64230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:53:10.083630   64230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:53:10.083723   64230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001292707s
	I0926 23:53:10.083875   64230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:53:10.083995   64230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.130:8443/livez
	I0926 23:53:10.084115   64230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:53:10.084210   64230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:53:10.084320   64230 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.240324477s
	I0926 23:53:10.084423   64230 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.837365507s
	I0926 23:53:10.084487   64230 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.004320011s
	I0926 23:53:10.084614   64230 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:53:10.084807   64230 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:53:10.084919   64230 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:53:10.085160   64230 kubeadm.go:318] [mark-control-plane] Marking the node flannel-421834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:53:10.085234   64230 kubeadm.go:318] [bootstrap-token] Using token: 4os0st.sh1dwg769x37x84s
	I0926 23:53:10.116755   64230 out.go:252]   - Configuring RBAC rules ...
	I0926 23:53:10.116963   64230 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:53:10.117080   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:53:10.117276   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:53:10.117478   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:53:10.117640   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:53:10.117762   64230 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:53:10.117956   64230 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:53:10.118030   64230 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:53:10.118106   64230 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:53:10.118116   64230 kubeadm.go:318] 
	I0926 23:53:10.118205   64230 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:53:10.118214   64230 kubeadm.go:318] 
	I0926 23:53:10.118345   64230 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:53:10.118360   64230 kubeadm.go:318] 
	I0926 23:53:10.118396   64230 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:53:10.118489   64230 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:53:10.118561   64230 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:53:10.118570   64230 kubeadm.go:318] 
	I0926 23:53:10.118658   64230 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:53:10.118666   64230 kubeadm.go:318] 
	I0926 23:53:10.118732   64230 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:53:10.118741   64230 kubeadm.go:318] 
	I0926 23:53:10.118844   64230 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:53:10.118952   64230 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:53:10.119041   64230 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:53:10.119048   64230 kubeadm.go:318] 
	I0926 23:53:10.119242   64230 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:53:10.119372   64230 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:53:10.119389   64230 kubeadm.go:318] 
	I0926 23:53:10.119523   64230 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 4os0st.sh1dwg769x37x84s \
	I0926 23:53:10.119693   64230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 23:53:10.119739   64230 kubeadm.go:318] 	--control-plane 
	I0926 23:53:10.119762   64230 kubeadm.go:318] 
	I0926 23:53:10.119919   64230 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:53:10.119936   64230 kubeadm.go:318] 
	I0926 23:53:10.120053   64230 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 4os0st.sh1dwg769x37x84s \
	I0926 23:53:10.120167   64230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
	I0926 23:53:10.120181   64230 cni.go:84] Creating CNI manager for "flannel"
	I0926 23:53:10.178904   64230 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	W0926 23:53:09.152106   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	W0926 23:53:11.649252   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	I0926 23:53:09.138210   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:09.138974   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:09.139002   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:09.139401   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:09.139457   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:09.139402   66416 retry.go:31] will retry after 1.24975676s: waiting for domain to come up
	I0926 23:53:10.391406   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:10.392295   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:10.392325   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:10.392771   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:10.392859   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:10.392767   66416 retry.go:31] will retry after 1.39046487s: waiting for domain to come up
	I0926 23:53:11.785124   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:11.785782   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:11.785805   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:11.786195   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:11.786220   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:11.786165   66416 retry.go:31] will retry after 1.841603756s: waiting for domain to come up
	I0926 23:53:10.225761   64230 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 23:53:10.238860   64230 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 23:53:10.238885   64230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I0926 23:53:10.274150   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 23:53:10.803561   64230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:53:10.803617   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:10.803735   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-421834 minikube.k8s.io/updated_at=2025_09_26T23_53_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=flannel-421834 minikube.k8s.io/primary=true
	I0926 23:53:10.843210   64230 ops.go:34] apiserver oom_adj: -16
	I0926 23:53:10.944365   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:11.445111   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:11.944619   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:12.444654   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:12.945159   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.445131   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.944746   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:14.444939   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.650389   62447 pod_ready.go:94] pod "coredns-66bc5c9577-b2hgd" is "Ready"
	I0926 23:53:13.650411   62447 pod_ready.go:86] duration metric: took 36.508072379s for pod "coredns-66bc5c9577-b2hgd" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.650421   62447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.653233   62447 pod_ready.go:99] pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-rwz5t" not found
	I0926 23:53:13.653253   62447 pod_ready.go:86] duration metric: took 2.826491ms for pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.657377   62447 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.664402   62447 pod_ready.go:94] pod "etcd-enable-default-cni-421834" is "Ready"
	I0926 23:53:13.664424   62447 pod_ready.go:86] duration metric: took 7.018923ms for pod "etcd-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.667333   62447 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.675804   62447 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-421834" is "Ready"
	I0926 23:53:13.675864   62447 pod_ready.go:86] duration metric: took 8.503966ms for pod "kube-apiserver-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.678769   62447 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.047842   62447 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-421834" is "Ready"
	I0926 23:53:14.047875   62447 pod_ready.go:86] duration metric: took 369.075451ms for pod "kube-controller-manager-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.247491   62447 pod_ready.go:83] waiting for pod "kube-proxy-qkshr" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.646615   62447 pod_ready.go:94] pod "kube-proxy-qkshr" is "Ready"
	I0926 23:53:14.646653   62447 pod_ready.go:86] duration metric: took 399.110961ms for pod "kube-proxy-qkshr" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.847851   62447 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:15.248070   62447 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-421834" is "Ready"
	I0926 23:53:15.248112   62447 pod_ready.go:86] duration metric: took 400.223954ms for pod "kube-scheduler-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:15.248128   62447 pod_ready.go:40] duration metric: took 38.116640819s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:15.309709   62447 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:53:15.312939   62447 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-421834" cluster and "default" namespace by default
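The pod_ready.go lines above poll the API server until each selected pod reports the Ready condition (or is gone). Here is a rough client-go equivalent of that wait, not the test helper itself; the kubeconfig path is the one this run writes, and podsReady, the kube-dns selector and the 4-minute timeout are illustrative choices.

// Poll pods matching a label selector until all of them are Ready or a timeout hits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21642-6020/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podsReady(ctx, cs, "kube-system", "k8s-app=kube-dns")
		if ok {
			fmt.Println("coredns is Ready")
			return
		}
		if err != nil {
			fmt.Println("listing pods:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for readiness")
			return
		case <-time.After(2 * time.Second):
		}
	}
}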
	I0926 23:53:14.944969   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:15.445270   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:15.713410   64230 kubeadm.go:1113] duration metric: took 4.909853439s to wait for elevateKubeSystemPrivileges
	I0926 23:53:15.713471   64230 kubeadm.go:402] duration metric: took 20.090602703s to StartCluster
	I0926 23:53:15.713496   64230 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:15.713612   64230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:15.716100   64230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:15.716437   64230 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.130 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:15.716586   64230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:53:15.716880   64230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:53:15.716975   64230 addons.go:69] Setting storage-provisioner=true in profile "flannel-421834"
	I0926 23:53:15.717000   64230 addons.go:238] Setting addon storage-provisioner=true in "flannel-421834"
	I0926 23:53:15.717033   64230 host.go:66] Checking if "flannel-421834" exists ...
	I0926 23:53:15.717209   64230 addons.go:69] Setting default-storageclass=true in profile "flannel-421834"
	I0926 23:53:15.717233   64230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-421834"
	I0926 23:53:15.717547   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.717593   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.717625   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.717668   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.717748   64230 config.go:182] Loaded profile config "flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:15.720060   64230 out.go:179] * Verifying Kubernetes components...
	I0926 23:53:15.722845   64230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:15.737104   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0926 23:53:15.737620   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.738112   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.738134   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.738208   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0926 23:53:15.738753   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.738875   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.739288   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.739292   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.739358   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.739754   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.740571   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.740607   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.744688   64230 addons.go:238] Setting addon default-storageclass=true in "flannel-421834"
	I0926 23:53:15.744734   64230 host.go:66] Checking if "flannel-421834" exists ...
	I0926 23:53:15.745116   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.745169   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.761606   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0926 23:53:15.763301   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.765135   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0926 23:53:15.765224   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.765414   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.767119   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.767275   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.767946   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.768025   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.768424   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.769199   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.769273   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.769897   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.774783   64230 main.go:141] libmachine: (flannel-421834) Calling .DriverName
	I0926 23:53:15.776927   64230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:53:13.630039   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:13.630694   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:13.630730   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:13.631138   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:13.631162   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:13.631106   66416 retry.go:31] will retry after 2.294192316s: waiting for domain to come up
	I0926 23:53:15.929303   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:15.930494   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:15.930792   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:15.931369   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:15.931586   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:15.931507   66416 retry.go:31] will retry after 3.412894975s: waiting for domain to come up
	I0926 23:53:15.779940   64230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:15.779963   64230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:53:15.779989   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHHostname
	I0926 23:53:15.785944   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.786725   64230 main.go:141] libmachine: (flannel-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:65:4f", ip: ""} in network mk-flannel-421834: {Iface:virbr2 ExpiryTime:2025-09-27 00:52:43 +0000 UTC Type:0 Mac:52:54:00:bc:65:4f Iaid: IPaddr:192.168.50.130 Prefix:24 Hostname:flannel-421834 Clientid:01:52:54:00:bc:65:4f}
	I0926 23:53:15.787141   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined IP address 192.168.50.130 and MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.787868   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHPort
	I0926 23:53:15.788630   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHKeyPath
	I0926 23:53:15.789031   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHUsername
	I0926 23:53:15.789388   64230 sshutil.go:53] new ssh client: &{IP:192.168.50.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/flannel-421834/id_rsa Username:docker}
	I0926 23:53:15.792087   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0926 23:53:15.792955   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.793812   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.793870   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.794398   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.794684   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.798132   64230 main.go:141] libmachine: (flannel-421834) Calling .DriverName
	I0926 23:53:15.798474   64230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:15.798509   64230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:53:15.798541   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHHostname
	I0926 23:53:15.804045   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.804641   64230 main.go:141] libmachine: (flannel-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:65:4f", ip: ""} in network mk-flannel-421834: {Iface:virbr2 ExpiryTime:2025-09-27 00:52:43 +0000 UTC Type:0 Mac:52:54:00:bc:65:4f Iaid: IPaddr:192.168.50.130 Prefix:24 Hostname:flannel-421834 Clientid:01:52:54:00:bc:65:4f}
	I0926 23:53:15.804720   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined IP address 192.168.50.130 and MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.804912   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHPort
	I0926 23:53:15.805176   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHKeyPath
	I0926 23:53:15.805405   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHUsername
	I0926 23:53:15.805620   64230 sshutil.go:53] new ssh client: &{IP:192.168.50.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/flannel-421834/id_rsa Username:docker}
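The sshutil/ssh_runner lines above establish an SSH session to the VM with the generated id_rsa key and then run commands (or scp files) over it. Below is a minimal sketch with golang.org/x/crypto/ssh, reusing the address, user and key path from the log; runOverSSH and the example command are illustrative, not minikube's ssh_runner.

// Open an SSH session to the VM with a private key and run one command over it.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.130:22", "docker",
		"/home/jenkins/minikube-integration/21642-6020/.minikube/machines/flannel-421834/id_rsa",
		"sudo systemctl is-active kubelet")
	fmt.Println(out, err)
}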
	I0926 23:53:16.209327   64230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:53:16.209525   64230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:16.355311   64230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:16.482372   64230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:17.533563   64230 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.323987139s)
	I0926 23:53:17.533595   64230 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0926 23:53:17.535328   64230 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.325641904s)
	I0926 23:53:17.535910   64230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.180559917s)
	I0926 23:53:17.535948   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.535959   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.536330   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:17.536368   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.536375   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:17.536384   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.536390   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.536981   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.537258   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:17.537205   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:17.540773   64230 node_ready.go:35] waiting up to 15m0s for node "flannel-421834" to be "Ready" ...
	I0926 23:53:17.596474   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.596502   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.596784   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.596804   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.046233   64230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-421834" context rescaled to 1 replicas
	I0926 23:53:18.306402   64230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.823983228s)
	I0926 23:53:18.306465   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:18.306476   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:18.307068   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:18.307118   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:18.307126   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.307134   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:18.307143   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:18.307561   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:18.307578   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.307594   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:18.311866   64230 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0926 23:53:18.313253   64230 addons.go:514] duration metric: took 2.596384019s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0926 23:53:19.548935   64230 node_ready.go:57] node "flannel-421834" has "Ready":"False" status (will retry)
	I0926 23:53:19.346311   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:19.347330   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:19.347440   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:19.348173   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:19.348417   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:19.348233   66416 retry.go:31] will retry after 3.007983737s: waiting for domain to come up
	I0926 23:53:22.360710   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.361659   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has current primary IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.361687   66389 main.go:141] libmachine: (bridge-421834) found domain IP: 192.168.61.22
	I0926 23:53:22.361701   66389 main.go:141] libmachine: (bridge-421834) reserving static IP address...
	I0926 23:53:22.362248   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find host DHCP lease matching {name: "bridge-421834", mac: "52:54:00:35:cf:e4", ip: "192.168.61.22"} in network mk-bridge-421834
	I0926 23:53:22.590911   66389 main.go:141] libmachine: (bridge-421834) reserved static IP address 192.168.61.22 for domain bridge-421834
	I0926 23:53:22.590941   66389 main.go:141] libmachine: (bridge-421834) DBG | Getting to WaitForSSH function...
	I0926 23:53:22.590952   66389 main.go:141] libmachine: (bridge-421834) waiting for SSH...
	I0926 23:53:22.594463   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.594998   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.595025   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.595207   66389 main.go:141] libmachine: (bridge-421834) DBG | Using SSH client type: external
	I0926 23:53:22.595229   66389 main.go:141] libmachine: (bridge-421834) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa (-rw-------)
	I0926 23:53:22.595272   66389 main.go:141] libmachine: (bridge-421834) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:53:22.595285   66389 main.go:141] libmachine: (bridge-421834) DBG | About to run SSH command:
	I0926 23:53:22.595300   66389 main.go:141] libmachine: (bridge-421834) DBG | exit 0
	I0926 23:53:22.728475   66389 main.go:141] libmachine: (bridge-421834) DBG | SSH cmd err, output: <nil>: 
	I0926 23:53:22.728881   66389 main.go:141] libmachine: (bridge-421834) domain creation complete
	I0926 23:53:22.729358   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:22.730073   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:22.730313   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:22.730511   66389 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 23:53:22.730541   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:22.731978   66389 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 23:53:22.731994   66389 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 23:53:22.732002   66389 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 23:53:22.732008   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.735233   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.735763   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.735796   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.736064   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.736260   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.736421   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.736554   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.736728   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.737005   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.737022   66389 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 23:53:22.846038   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:53:22.846083   66389 main.go:141] libmachine: Detecting the provisioner...
	I0926 23:53:22.846095   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.850385   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.850884   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.850919   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.851234   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.851469   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.851697   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.851893   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.852114   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.852417   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.852432   66389 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 23:53:22.966516   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 23:53:22.966639   66389 main.go:141] libmachine: found compatible host: buildroot
	I0926 23:53:22.966657   66389 main.go:141] libmachine: Provisioning with buildroot...
	I0926 23:53:22.966668   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:22.966970   66389 buildroot.go:166] provisioning hostname "bridge-421834"
	I0926 23:53:22.966999   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:22.967205   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.970717   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.971216   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.971246   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.971437   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.971670   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.971893   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.972083   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.972266   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.972582   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.972602   66389 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-421834 && echo "bridge-421834" | sudo tee /etc/hostname
	I0926 23:53:23.104989   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-421834
	
	I0926 23:53:23.105021   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.108787   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.109198   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.109230   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.109436   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.109665   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.109883   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.110062   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.110281   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:23.110587   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:23.110609   66389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-421834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-421834/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-421834' | sudo tee -a /etc/hosts; 
				fi
			fi
	W0926 23:53:22.045629   64230 node_ready.go:57] node "flannel-421834" has "Ready":"False" status (will retry)
	I0926 23:53:22.545184   64230 node_ready.go:49] node "flannel-421834" is "Ready"
	I0926 23:53:22.545213   64230 node_ready.go:38] duration metric: took 5.004290153s for node "flannel-421834" to be "Ready" ...
	I0926 23:53:22.545227   64230 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:53:22.545288   64230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:53:22.573269   64230 api_server.go:72] duration metric: took 6.856787423s to wait for apiserver process to appear ...
	I0926 23:53:22.573298   64230 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:53:22.573313   64230 api_server.go:253] Checking apiserver healthz at https://192.168.50.130:8443/healthz ...
	I0926 23:53:22.578813   64230 api_server.go:279] https://192.168.50.130:8443/healthz returned 200:
	ok
	I0926 23:53:22.580600   64230 api_server.go:141] control plane version: v1.34.0
	I0926 23:53:22.580639   64230 api_server.go:131] duration metric: took 7.325266ms to wait for apiserver health ...
	I0926 23:53:22.580650   64230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:53:22.585382   64230 system_pods.go:59] 7 kube-system pods found
	I0926 23:53:22.585426   64230 system_pods.go:61] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.585434   64230 system_pods.go:61] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:22.585443   64230 system_pods.go:61] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.585449   64230 system_pods.go:61] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.585455   64230 system_pods.go:61] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.585459   64230 system_pods.go:61] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.585469   64230 system_pods.go:61] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.585476   64230 system_pods.go:74] duration metric: took 4.819502ms to wait for pod list to return data ...
	I0926 23:53:22.585486   64230 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:53:22.588921   64230 default_sa.go:45] found service account: "default"
	I0926 23:53:22.588950   64230 default_sa.go:55] duration metric: took 3.45642ms for default service account to be created ...
	I0926 23:53:22.588960   64230 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:53:22.599173   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:22.599211   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.599232   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:22.599242   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.599250   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.599256   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.599266   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.599278   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.599331   64230 retry.go:31] will retry after 266.786551ms: missing components: kube-dns
	I0926 23:53:22.898319   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:22.898355   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.898361   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:22.898372   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.898377   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.898382   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.898418   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.898435   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.898459   64230 retry.go:31] will retry after 370.047017ms: missing components: kube-dns
	I0926 23:53:23.284233   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:23.284283   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:23.284294   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:23.284304   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:23.284321   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:23.284328   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:23.284333   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:23.284342   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:23.284366   64230 retry.go:31] will retry after 338.61988ms: missing components: kube-dns
	I0926 23:53:23.643216   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:23.643261   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:23.643270   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:23.643280   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:23.643285   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:23.643291   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:23.643295   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:23.643302   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:23.643321   64230 retry.go:31] will retry after 399.819321ms: missing components: kube-dns
	I0926 23:53:24.049673   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:24.049706   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:24.049712   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:24.049719   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:24.049722   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:24.049731   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:24.049735   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:24.049740   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:24.049754   64230 retry.go:31] will retry after 558.110871ms: missing components: kube-dns
	I0926 23:53:23.235791   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:53:23.235846   66389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:53:23.235913   66389 buildroot.go:174] setting up certificates
	I0926 23:53:23.235928   66389 provision.go:84] configureAuth start
	I0926 23:53:23.235947   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:23.236275   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:23.239811   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.240273   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.240310   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.240505   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.243538   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.244109   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.244141   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.244422   66389 provision.go:143] copyHostCerts
	I0926 23:53:23.244482   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:53:23.244505   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:53:23.244595   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:53:23.244725   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:53:23.244735   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:53:23.244768   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:53:23.244877   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:53:23.244889   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:53:23.244930   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:53:23.245040   66389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.bridge-421834 san=[127.0.0.1 192.168.61.22 bridge-421834 localhost minikube]
	I0926 23:53:23.618556   66389 provision.go:177] copyRemoteCerts
	I0926 23:53:23.618624   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:53:23.618646   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.622767   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.623330   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.623361   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.623653   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.623913   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.624121   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.624261   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:23.723185   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:53:23.788584   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:53:23.842082   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 23:53:23.880662   66389 provision.go:87] duration metric: took 644.71758ms to configureAuth
	I0926 23:53:23.880692   66389 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:53:23.880916   66389 config.go:182] Loaded profile config "bridge-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:23.880994   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.884495   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.885063   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.885098   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.885463   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.885699   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.885924   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.886122   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.886522   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:23.886722   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:23.886736   66389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:53:24.148919   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:53:24.148950   66389 main.go:141] libmachine: Checking connection to Docker...
	I0926 23:53:24.148961   66389 main.go:141] libmachine: (bridge-421834) Calling .GetURL
	I0926 23:53:24.150275   66389 main.go:141] libmachine: (bridge-421834) DBG | using libvirt version 8000000
	I0926 23:53:24.153008   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.153384   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.153432   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.153622   66389 main.go:141] libmachine: Docker is up and running!
	I0926 23:53:24.153636   66389 main.go:141] libmachine: Reticulating splines...
	I0926 23:53:24.153642   66389 client.go:171] duration metric: took 20.80850247s to LocalClient.Create
	I0926 23:53:24.153664   66389 start.go:167] duration metric: took 20.808590624s to libmachine.API.Create "bridge-421834"
	I0926 23:53:24.153671   66389 start.go:293] postStartSetup for "bridge-421834" (driver="kvm2")
	I0926 23:53:24.153679   66389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:53:24.153702   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.153959   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:53:24.153981   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.157161   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.157549   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.157581   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.157747   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.157970   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.158135   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.158262   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.242760   66389 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:53:24.248423   66389 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:53:24.248454   66389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:53:24.248546   66389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:53:24.248672   66389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:53:24.248843   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:53:24.261877   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:53:24.294992   66389 start.go:296] duration metric: took 141.309355ms for postStartSetup
	I0926 23:53:24.295056   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:24.295859   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:24.299304   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.299686   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.299714   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.300033   66389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json ...
	I0926 23:53:24.300312   66389 start.go:128] duration metric: took 20.974125431s to createHost
	I0926 23:53:24.300339   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.303319   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.303715   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.303749   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.303928   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.304158   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.304347   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.304471   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.304655   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:24.304919   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:24.304931   66389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:53:24.415992   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758930804.377640726
	
	I0926 23:53:24.416017   66389 fix.go:216] guest clock: 1758930804.377640726
	I0926 23:53:24.416024   66389 fix.go:229] Guest: 2025-09-26 23:53:24.377640726 +0000 UTC Remote: 2025-09-26 23:53:24.300327312 +0000 UTC m=+21.115024473 (delta=77.313414ms)
	I0926 23:53:24.416044   66389 fix.go:200] guest clock delta is within tolerance: 77.313414ms
	I0926 23:53:24.416048   66389 start.go:83] releasing machines lock for "bridge-421834", held for 21.089950951s
	I0926 23:53:24.416073   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.416376   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:24.419489   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.419871   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.419893   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.420150   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.420725   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.420935   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.421036   66389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:53:24.421085   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.421190   66389 ssh_runner.go:195] Run: cat /version.json
	I0926 23:53:24.421211   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.424480   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.424612   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.424970   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.424994   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.425021   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.425059   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.425157   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.425409   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.425420   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.425603   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.425701   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.425785   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.425891   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.425962   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.508618   66389 ssh_runner.go:195] Run: systemctl --version
	I0926 23:53:24.539059   66389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:53:24.702907   66389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:53:24.710782   66389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:53:24.710886   66389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:53:24.734076   66389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 23:53:24.734098   66389 start.go:495] detecting cgroup driver to use...
	I0926 23:53:24.734153   66389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:53:24.756401   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:53:24.778106   66389 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:53:24.778184   66389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:53:24.799542   66389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:53:24.822114   66389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:53:24.996303   66389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:53:25.224791   66389 docker.go:234] disabling docker service ...
	I0926 23:53:25.224891   66389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:53:25.243432   66389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:53:25.259878   66389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:53:25.431451   66389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:53:25.599621   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:53:25.616957   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:53:25.643436   66389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:53:25.643526   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.657988   66389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 23:53:25.658047   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.672857   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.688715   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.709342   66389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:53:25.727302   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.744379   66389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.770014   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.784756   66389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:53:25.796461   66389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 23:53:25.796554   66389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 23:53:25.823440   66389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:53:25.838860   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:25.999026   66389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:53:26.127274   66389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:53:26.127366   66389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:53:26.133585   66389 start.go:563] Will wait 60s for crictl version
	I0926 23:53:26.133665   66389 ssh_runner.go:195] Run: which crictl
	I0926 23:53:26.138367   66389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:53:26.189930   66389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 23:53:26.190029   66389 ssh_runner.go:195] Run: crio --version
	I0926 23:53:26.223605   66389 ssh_runner.go:195] Run: crio --version
	I0926 23:53:26.281579   66389 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 23:53:26.282887   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:26.286371   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:26.286847   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:26.286880   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:26.287176   66389 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0926 23:53:26.292885   66389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:53:26.312843   66389 kubeadm.go:883] updating cluster {Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:53:26.312973   66389 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:53:26.313032   66389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:53:26.352138   66389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 23:53:26.352234   66389 ssh_runner.go:195] Run: which lz4
	I0926 23:53:26.357081   66389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:53:26.362557   66389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 23:53:26.362599   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0926 23:53:28.137751   66389 crio.go:462] duration metric: took 1.780698913s to copy over tarball
	I0926 23:53:28.137885   66389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:53:24.614082   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:24.614128   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:24.614139   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:24.614150   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:24.614157   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:24.614163   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:24.614169   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:24.614181   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:24.614205   64230 retry.go:31] will retry after 773.437901ms: missing components: kube-dns
	I0926 23:53:25.393682   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:25.393730   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:25.393738   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:25.393746   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:25.393753   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:25.393759   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:25.393779   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:25.393789   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:25.393806   64230 retry.go:31] will retry after 1.022431217s: missing components: kube-dns
	I0926 23:53:26.420976   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:26.421026   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:26.421036   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:26.421044   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:26.421052   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:26.421059   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:26.421065   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:26.421073   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:26.421092   64230 retry.go:31] will retry after 1.319572477s: missing components: kube-dns
	I0926 23:53:27.746429   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:27.746483   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:27.746496   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:27.746504   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:27.746516   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:27.746523   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:27.746528   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:27.746536   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:27.746554   64230 retry.go:31] will retry after 1.82235326s: missing components: kube-dns
	I0926 23:53:30.014215   66389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876301553s)
	I0926 23:53:30.014243   66389 crio.go:469] duration metric: took 1.876457477s to extract the tarball
	I0926 23:53:30.014251   66389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 23:53:30.059487   66389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:53:30.115146   66389 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:53:30.115176   66389 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:53:30.115187   66389 kubeadm.go:934] updating node { 192.168.61.22 8443 v1.34.0 crio true true} ...
	I0926 23:53:30.115308   66389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-421834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0926 23:53:30.115388   66389 ssh_runner.go:195] Run: crio config
	I0926 23:53:30.167607   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:30.167639   66389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:53:30.167667   66389 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.22 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-421834 NodeName:bridge-421834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:53:30.167811   66389 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-421834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:53:30.167908   66389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:53:30.180734   66389 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:53:30.180805   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:53:30.194149   66389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 23:53:30.217696   66389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:53:30.240739   66389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0926 23:53:30.264328   66389 ssh_runner.go:195] Run: grep 192.168.61.22	control-plane.minikube.internal$ /etc/hosts
	I0926 23:53:30.269648   66389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:53:30.287057   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:30.442383   66389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:30.479080   66389 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834 for IP: 192.168.61.22
	I0926 23:53:30.479099   66389 certs.go:195] generating shared ca certs ...
	I0926 23:53:30.479113   66389 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.479292   66389 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:53:30.479364   66389 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:53:30.479379   66389 certs.go:257] generating profile certs ...
	I0926 23:53:30.479454   66389 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key
	I0926 23:53:30.479470   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt with IP's: []
	I0926 23:53:30.614117   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt ...
	I0926 23:53:30.614146   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: {Name:mk17199a9894daa8e1fa3f5d03c581f8755160b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.614322   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key ...
	I0926 23:53:30.614333   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key: {Name:mk5b79db2f23a0408c20d1d2457c1875b85a52ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.614409   66389 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562
	I0926 23:53:30.614425   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.22]
	I0926 23:53:30.798397   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 ...
	I0926 23:53:30.798424   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562: {Name:mkbe05319d1195665a56244768f88be845598026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.798593   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562 ...
	I0926 23:53:30.798609   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562: {Name:mkb5acbb2d9a9d4b3b899cbffa845b207e16c72e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.798682   66389 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt
	I0926 23:53:30.798776   66389 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562 -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key
	I0926 23:53:30.798853   66389 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key
	I0926 23:53:30.798865   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt with IP's: []
	I0926 23:53:31.109615   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt ...
	I0926 23:53:31.109646   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt: {Name:mkfb5969364c71ffbef78a5f55d4f61e4da59e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:31.109859   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key ...
	I0926 23:53:31.109877   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key: {Name:mk487c1900a9dcdeef7b8e4b33f6ca9e9211812a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:31.110063   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:53:31.110102   66389 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:53:31.110111   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:53:31.110132   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:53:31.110153   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:53:31.110176   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:53:31.110212   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:53:31.110843   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:53:31.150929   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:53:31.199098   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:53:31.246838   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:53:31.283435   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:53:31.320685   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:53:31.355710   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:53:31.393463   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:53:31.431568   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:53:31.465636   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:53:31.499181   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:53:31.532864   66389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:53:31.559868   66389 ssh_runner.go:195] Run: openssl version
	I0926 23:53:31.569935   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:53:31.587038   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.593896   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.593977   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.603261   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:53:31.620931   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:53:31.636700   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.642977   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.643036   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.651130   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:53:31.668900   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:53:31.687844   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.695764   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.695857   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.705293   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
	I0926 23:53:31.721060   66389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:53:31.726820   66389 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:53:31.726909   66389 kubeadm.go:400] StartCluster: {Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:53:31.726989   66389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:53:31.727056   66389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:53:31.773517   66389 cri.go:89] found id: ""
	I0926 23:53:31.773584   66389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:53:31.787140   66389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:53:31.802588   66389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:53:31.819198   66389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:53:31.819220   66389 kubeadm.go:157] found existing configuration files:
	
	I0926 23:53:31.819279   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:53:31.838315   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:53:31.838392   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:53:31.854112   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:53:31.868738   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:53:31.868806   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:53:31.888357   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:53:31.910570   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:53:31.910649   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:53:31.929211   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:53:31.941990   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:53:31.942065   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 23:53:31.956055   66389 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 23:53:32.131816   66389 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 23:53:29.574916   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:29.574958   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:29.574968   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:29.574974   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:29.574979   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:29.574985   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:29.574990   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:29.574994   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:29.575015   64230 retry.go:31] will retry after 1.825517142s: missing components: kube-dns
	I0926 23:53:31.553883   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:31.553924   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:31.553933   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:31.553943   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:31.553949   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:31.553957   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:31.553962   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:31.553968   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:31.553988   64230 retry.go:31] will retry after 2.267864987s: missing components: kube-dns
	I0926 23:53:33.828310   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:33.828346   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:33.828356   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:33.828364   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:33.828370   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:33.828381   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:33.828390   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:33.828401   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:33.828431   64230 retry.go:31] will retry after 2.442062906s: missing components: kube-dns
	I0926 23:53:36.276431   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:36.276464   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:36.276472   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:36.276481   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:36.276489   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:36.276494   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:36.276499   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:36.276506   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:36.276528   64230 retry.go:31] will retry after 3.88102041s: missing components: kube-dns
	I0926 23:53:40.166704   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:40.166736   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Running
	I0926 23:53:40.166743   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:40.166749   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:40.166755   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:40.166760   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:40.166765   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:40.166769   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:40.166779   64230 system_pods.go:126] duration metric: took 17.577811922s to wait for k8s-apps to be running ...
	I0926 23:53:40.166788   64230 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:53:40.166856   64230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:53:40.191875   64230 system_svc.go:56] duration metric: took 25.068251ms WaitForService to wait for kubelet
	I0926 23:53:40.191923   64230 kubeadm.go:586] duration metric: took 24.475441358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:40.191944   64230 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:53:40.196633   64230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:53:40.196668   64230 node_conditions.go:123] node cpu capacity is 2
	I0926 23:53:40.196689   64230 node_conditions.go:105] duration metric: took 4.737391ms to run NodePressure ...
	I0926 23:53:40.196703   64230 start.go:241] waiting for startup goroutines ...
	I0926 23:53:40.196714   64230 start.go:246] waiting for cluster config update ...
	I0926 23:53:40.196729   64230 start.go:255] writing updated cluster config ...
	I0926 23:53:40.197083   64230 ssh_runner.go:195] Run: rm -f paused
	I0926 23:53:40.205632   64230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:40.211699   64230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqjzf" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.220181   64230 pod_ready.go:94] pod "coredns-66bc5c9577-mqjzf" is "Ready"
	I0926 23:53:40.220217   64230 pod_ready.go:86] duration metric: took 8.486544ms for pod "coredns-66bc5c9577-mqjzf" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.224240   64230 pod_ready.go:83] waiting for pod "etcd-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.233287   64230 pod_ready.go:94] pod "etcd-flannel-421834" is "Ready"
	I0926 23:53:40.233354   64230 pod_ready.go:86] duration metric: took 9.081499ms for pod "etcd-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.237176   64230 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.243744   64230 pod_ready.go:94] pod "kube-apiserver-flannel-421834" is "Ready"
	I0926 23:53:40.243771   64230 pod_ready.go:86] duration metric: took 6.565667ms for pod "kube-apiserver-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.246287   64230 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.611165   64230 pod_ready.go:94] pod "kube-controller-manager-flannel-421834" is "Ready"
	I0926 23:53:40.611197   64230 pod_ready.go:86] duration metric: took 364.881268ms for pod "kube-controller-manager-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.811021   64230 pod_ready.go:83] waiting for pod "kube-proxy-4mmdk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.210445   64230 pod_ready.go:94] pod "kube-proxy-4mmdk" is "Ready"
	I0926 23:53:41.210487   64230 pod_ready.go:86] duration metric: took 399.43112ms for pod "kube-proxy-4mmdk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.413016   64230 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.811194   64230 pod_ready.go:94] pod "kube-scheduler-flannel-421834" is "Ready"
	I0926 23:53:41.811229   64230 pod_ready.go:86] duration metric: took 398.178042ms for pod "kube-scheduler-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.811245   64230 pod_ready.go:40] duration metric: took 1.605582664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:41.862469   64230 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:53:41.865112   64230 out.go:179] * Done! kubectl is now configured to use "flannel-421834" cluster and "default" namespace by default
	I0926 23:53:44.841714   66389 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:53:44.841815   66389 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:53:44.841914   66389 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:53:44.842004   66389 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:53:44.842131   66389 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:53:44.842235   66389 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:53:44.843943   66389 out.go:252]   - Generating certificates and keys ...
	I0926 23:53:44.844024   66389 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:53:44.844106   66389 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:53:44.844175   66389 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:53:44.844225   66389 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:53:44.844282   66389 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:53:44.844326   66389 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:53:44.844389   66389 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:53:44.844572   66389 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-421834 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0926 23:53:44.844659   66389 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:53:44.844845   66389 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-421834 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0926 23:53:44.844938   66389 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:53:44.845032   66389 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:53:44.845103   66389 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:53:44.845210   66389 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:53:44.845322   66389 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:53:44.845413   66389 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:53:44.845503   66389 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:53:44.845593   66389 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:53:44.845704   66389 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:53:44.845843   66389 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:53:44.845941   66389 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:53:44.847026   66389 out.go:252]   - Booting up control plane ...
	I0926 23:53:44.847119   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:53:44.847226   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:53:44.847299   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:53:44.847399   66389 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:53:44.847566   66389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:53:44.847718   66389 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:53:44.847805   66389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:53:44.847893   66389 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:53:44.848049   66389 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:53:44.848180   66389 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:53:44.848245   66389 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001452515s
	I0926 23:53:44.848336   66389 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:53:44.848413   66389 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.22:8443/livez
	I0926 23:53:44.848552   66389 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:53:44.848656   66389 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:53:44.848759   66389 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.899153305s
	I0926 23:53:44.848883   66389 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.330601769s
	I0926 23:53:44.848976   66389 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502741118s
	I0926 23:53:44.849097   66389 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:53:44.849243   66389 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:53:44.849331   66389 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:53:44.849526   66389 kubeadm.go:318] [mark-control-plane] Marking the node bridge-421834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:53:44.849603   66389 kubeadm.go:318] [bootstrap-token] Using token: kd6815.ojx3n455o8zykny6
	I0926 23:53:44.850986   66389 out.go:252]   - Configuring RBAC rules ...
	I0926 23:53:44.851099   66389 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:53:44.851228   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:53:44.851433   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:53:44.851642   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:53:44.851750   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:53:44.851854   66389 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:53:44.851998   66389 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:53:44.852066   66389 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:53:44.852126   66389 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:53:44.852132   66389 kubeadm.go:318] 
	I0926 23:53:44.852186   66389 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:53:44.852192   66389 kubeadm.go:318] 
	I0926 23:53:44.852279   66389 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:53:44.852297   66389 kubeadm.go:318] 
	I0926 23:53:44.852331   66389 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:53:44.852427   66389 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:53:44.852477   66389 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:53:44.852487   66389 kubeadm.go:318] 
	I0926 23:53:44.852539   66389 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:53:44.852543   66389 kubeadm.go:318] 
	I0926 23:53:44.852616   66389 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:53:44.852630   66389 kubeadm.go:318] 
	I0926 23:53:44.852700   66389 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:53:44.852769   66389 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:53:44.852855   66389 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:53:44.852862   66389 kubeadm.go:318] 
	I0926 23:53:44.852936   66389 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:53:44.853012   66389 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:53:44.853018   66389 kubeadm.go:318] 
	I0926 23:53:44.853090   66389 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kd6815.ojx3n455o8zykny6 \
	I0926 23:53:44.853182   66389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 23:53:44.853211   66389 kubeadm.go:318] 	--control-plane 
	I0926 23:53:44.853217   66389 kubeadm.go:318] 
	I0926 23:53:44.853290   66389 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:53:44.853300   66389 kubeadm.go:318] 
	I0926 23:53:44.853415   66389 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kd6815.ojx3n455o8zykny6 \
	I0926 23:53:44.853569   66389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
	I0926 23:53:44.853606   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:44.855848   66389 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 23:53:44.856961   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:53:44.873344   66389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 23:53:44.906446   66389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:53:44.906523   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:44.906598   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-421834 minikube.k8s.io/updated_at=2025_09_26T23_53_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=bridge-421834 minikube.k8s.io/primary=true
	I0926 23:53:45.063070   66389 ops.go:34] apiserver oom_adj: -16
	I0926 23:53:45.063206   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:45.563664   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:46.063658   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:46.563350   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:47.063599   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:47.564089   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:48.063624   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:48.564058   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:49.064075   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:49.563360   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:50.064065   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:50.236803   66389 kubeadm.go:1113] duration metric: took 5.330336807s to wait for elevateKubeSystemPrivileges
	I0926 23:53:50.236867   66389 kubeadm.go:402] duration metric: took 18.509960989s to StartCluster
	I0926 23:53:50.236892   66389 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:50.236965   66389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:50.239258   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:50.239549   66389 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:50.239613   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:53:50.239650   66389 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:53:50.239753   66389 addons.go:69] Setting storage-provisioner=true in profile "bridge-421834"
	I0926 23:53:50.239776   66389 addons.go:69] Setting default-storageclass=true in profile "bridge-421834"
	I0926 23:53:50.239780   66389 addons.go:238] Setting addon storage-provisioner=true in "bridge-421834"
	I0926 23:53:50.239799   66389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-421834"
	I0926 23:53:50.239811   66389 host.go:66] Checking if "bridge-421834" exists ...
	I0926 23:53:50.239816   66389 config.go:182] Loaded profile config "bridge-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:50.240362   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.240364   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.240408   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.240427   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.241120   66389 out.go:179] * Verifying Kubernetes components...
	I0926 23:53:50.242275   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:50.256055   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0926 23:53:50.256705   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.257229   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.257252   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.257648   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.257929   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.258084   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0926 23:53:50.258574   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.259226   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.259250   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.259660   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.260226   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.260268   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.262397   66389 addons.go:238] Setting addon default-storageclass=true in "bridge-421834"
	I0926 23:53:50.262469   66389 host.go:66] Checking if "bridge-421834" exists ...
	I0926 23:53:50.262906   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.262949   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.277885   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I0926 23:53:50.278377   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.278487   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0926 23:53:50.279074   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.279098   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.279165   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.279512   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.279700   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.279725   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.280137   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.280296   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.280345   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.280478   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.282817   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:50.284486   66389 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:53:50.285795   66389 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:50.285814   66389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:53:50.285845   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:50.289682   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.290252   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:50.290314   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.290601   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:50.290876   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:50.291069   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:50.291213   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:50.297456   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0926 23:53:50.298056   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.298699   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.298723   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.299110   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.299293   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.301249   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:50.301483   66389 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:50.301502   66389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:53:50.301519   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:50.305236   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.305851   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:50.305879   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.306072   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:50.306224   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:50.306355   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:50.306457   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:50.712116   66389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:50.752112   66389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:50.752209   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:53:51.031357   66389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
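	[editor's note] The sed pipeline a few lines above rewrites the coredns ConfigMap so the Corefile gains a hosts block for host.minikube.internal and a log directive. A minimal Go sketch of that same string transformation is shown below; the sample Corefile and function names are illustrative assumptions, not minikube's actual implementation (which pipes kubectl output through sed as logged).
	// corefile_hosts.go: sketch of the Corefile edit performed by the logged sed command.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// injectHostRecord inserts a hosts{} stanza before the "forward . /etc/resolv.conf"
	// line and a "log" directive before the "errors" line, mirroring the two sed -e edits.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
				out = append(out, hostsBlock)
			}
			if trimmed == "errors" {
				out = append(out, "        log")
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}
	
	func main() {
		// Assumed sample Corefile fragment, for demonstration only.
		sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
		fmt.Println(injectHostRecord(sample, "192.168.61.1"))
	}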
	I0926 23:53:51.373443   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.373476   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.373784   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.373799   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.373808   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.373816   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.374092   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.374105   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.374738   66389 node_ready.go:35] waiting up to 15m0s for node "bridge-421834" to be "Ready" ...
	I0926 23:53:51.398770   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.398799   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.399100   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:51.399144   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.399153   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.400667   66389 node_ready.go:49] node "bridge-421834" is "Ready"
	I0926 23:53:51.400698   66389 node_ready.go:38] duration metric: took 25.944029ms for node "bridge-421834" to be "Ready" ...
	I0926 23:53:51.400723   66389 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:53:51.400782   66389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:53:51.803695   66389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.051437014s)
	I0926 23:53:51.803741   66389 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0926 23:53:52.332402   66389 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-421834" context rescaled to 1 replicas
	I0926 23:53:52.346710   66389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315306237s)
	I0926 23:53:52.346793   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:52.346798   66389 api_server.go:72] duration metric: took 2.107212078s to wait for apiserver process to appear ...
	I0926 23:53:52.346813   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:52.346821   66389 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:53:52.346950   66389 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0926 23:53:52.347191   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:52.347197   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:52.347217   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:52.347229   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:52.347240   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:52.347568   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:52.347609   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:52.347621   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:52.349314   66389 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0926 23:53:52.350678   66389 addons.go:514] duration metric: took 2.1110452s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0926 23:53:52.366659   66389 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
	I0926 23:53:52.370284   66389 api_server.go:141] control plane version: v1.34.0
	I0926 23:53:52.370317   66389 api_server.go:131] duration metric: took 23.391786ms to wait for apiserver health ...
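	[editor's note] The apiserver health wait logged above simply polls the /healthz endpoint until it returns 200 with body "ok". Below is a minimal, self-contained Go sketch of such a loop; the endpoint URL is taken from the log, but the plain HTTP client and skipped TLS verification are simplifying assumptions (the real check authenticates with the cluster's client certificates).
	// healthz_poll.go: illustrative apiserver healthz polling loop.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body) // expect "ok", as in the log
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.61.22:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}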
	I0926 23:53:52.370337   66389 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:53:52.379615   66389 system_pods.go:59] 8 kube-system pods found
	I0926 23:53:52.379691   66389 system_pods.go:61] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.379712   66389 system_pods.go:61] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.379733   66389 system_pods.go:61] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.379743   66389 system_pods.go:61] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.379774   66389 system_pods.go:61] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.379784   66389 system_pods.go:61] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:53:52.379789   66389 system_pods.go:61] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.379796   66389 system_pods.go:61] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending
	I0926 23:53:52.379805   66389 system_pods.go:74] duration metric: took 9.459647ms to wait for pod list to return data ...
	I0926 23:53:52.379842   66389 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:53:52.402179   66389 default_sa.go:45] found service account: "default"
	I0926 23:53:52.402215   66389 default_sa.go:55] duration metric: took 22.362847ms for default service account to be created ...
	I0926 23:53:52.402228   66389 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:53:52.409473   66389 system_pods.go:86] 8 kube-system pods found
	I0926 23:53:52.409514   66389 system_pods.go:89] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.409542   66389 system_pods.go:89] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.409560   66389 system_pods.go:89] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.409574   66389 system_pods.go:89] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.409584   66389 system_pods.go:89] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.409596   66389 system_pods.go:89] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:53:52.409606   66389 system_pods.go:89] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.409619   66389 system_pods.go:89] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:52.409654   66389 retry.go:31] will retry after 207.463574ms: missing components: kube-dns, kube-proxy
	I0926 23:53:52.623107   66389 system_pods.go:86] 8 kube-system pods found
	I0926 23:53:52.623139   66389 system_pods.go:89] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.623146   66389 system_pods.go:89] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.623153   66389 system_pods.go:89] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.623163   66389 system_pods.go:89] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.623167   66389 system_pods.go:89] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.623171   66389 system_pods.go:89] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Running
	I0926 23:53:52.623175   66389 system_pods.go:89] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.623181   66389 system_pods.go:89] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:52.623191   66389 system_pods.go:126] duration metric: took 220.956849ms to wait for k8s-apps to be running ...
	I0926 23:53:52.623206   66389 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:53:52.623255   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:53:52.651319   66389 system_svc.go:56] duration metric: took 28.102795ms WaitForService to wait for kubelet
	I0926 23:53:52.651349   66389 kubeadm.go:586] duration metric: took 2.411767704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:52.651365   66389 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:53:52.657789   66389 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:53:52.657814   66389 node_conditions.go:123] node cpu capacity is 2
	I0926 23:53:52.657855   66389 node_conditions.go:105] duration metric: took 6.485077ms to run NodePressure ...
	I0926 23:53:52.657868   66389 start.go:241] waiting for startup goroutines ...
	I0926 23:53:52.657875   66389 start.go:246] waiting for cluster config update ...
	I0926 23:53:52.657885   66389 start.go:255] writing updated cluster config ...
	I0926 23:53:52.658164   66389 ssh_runner.go:195] Run: rm -f paused
	I0926 23:53:52.672097   66389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:52.678053   66389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49fzk" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:53:54.685216   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:53:57.185562   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:53:59.687662   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:02.184172   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:04.186722   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:06.186876   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:08.685625   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:10.686610   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:12.687726   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:15.185017   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:17.185233   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:19.185655   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:21.192607   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:23.685450   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:26.185796   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:28.185952   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	I0926 23:54:30.187086   66389 pod_ready.go:94] pod "coredns-66bc5c9577-49fzk" is "Ready"
	I0926 23:54:30.187117   66389 pod_ready.go:86] duration metric: took 37.509023928s for pod "coredns-66bc5c9577-49fzk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.187131   66389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.189279   66389 pod_ready.go:99] pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-xw5nt" not found
	I0926 23:54:30.189299   66389 pod_ready.go:86] duration metric: took 2.161005ms for pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.192820   66389 pod_ready.go:83] waiting for pod "etcd-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.197725   66389 pod_ready.go:94] pod "etcd-bridge-421834" is "Ready"
	I0926 23:54:30.197751   66389 pod_ready.go:86] duration metric: took 4.89165ms for pod "etcd-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.201190   66389 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.205123   66389 pod_ready.go:94] pod "kube-apiserver-bridge-421834" is "Ready"
	I0926 23:54:30.205149   66389 pod_ready.go:86] duration metric: took 3.936999ms for pod "kube-apiserver-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.207292   66389 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.582600   66389 pod_ready.go:94] pod "kube-controller-manager-bridge-421834" is "Ready"
	I0926 23:54:30.582626   66389 pod_ready.go:86] duration metric: took 375.315209ms for pod "kube-controller-manager-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.782631   66389 pod_ready.go:83] waiting for pod "kube-proxy-x9dj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.183190   66389 pod_ready.go:94] pod "kube-proxy-x9dj6" is "Ready"
	I0926 23:54:31.183217   66389 pod_ready.go:86] duration metric: took 400.559229ms for pod "kube-proxy-x9dj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.384947   66389 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.783377   66389 pod_ready.go:94] pod "kube-scheduler-bridge-421834" is "Ready"
	I0926 23:54:31.783401   66389 pod_ready.go:86] duration metric: took 398.422954ms for pod "kube-scheduler-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.783412   66389 pod_ready.go:40] duration metric: took 39.111274508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:54:31.828635   66389 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:54:31.830519   66389 out.go:179] * Done! kubectl is now configured to use "bridge-421834" cluster and "default" namespace by default
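	[editor's note] The "extra waiting" phase above (pod_ready.go) repeatedly lists kube-system pods by label and retries roughly every 2-2.5 seconds until each reports the Ready condition or disappears. A minimal client-go sketch of that polling pattern follows; the kubeconfig path and the single label selector are assumptions for illustration and do not reproduce minikube's exact waiter.
	// pod_ready_sketch.go: illustrative readiness poll for labelled kube-system pods.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "k8s-app=kube-dns" // one of the labels listed in the log
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
					}
				}
				if allReady {
					fmt.Println("all matching pods are Ready")
					return
				}
			}
			time.Sleep(2 * time.Second) // matches the retry cadence visible in the log
		}
	}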
	
	
	==> CRI-O <==
	Sep 27 00:01:57 embed-certs-994238 crio[887]: time="2025-09-27 00:01:57.951065184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931317951035586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=889b7a24-c5a5-486e-aada-fae7ec0fe554 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:57 embed-certs-994238 crio[887]: time="2025-09-27 00:01:57.951906911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b514653-de32-4a40-a4cc-6c8ef27db943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:57 embed-certs-994238 crio[887]: time="2025-09-27 00:01:57.951966331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b514653-de32-4a40-a4cc-6c8ef27db943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:57 embed-certs-994238 crio[887]: time="2025-09-27 00:01:57.952167024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931153825719100,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=7b514653-de32-4a40-a4cc-6c8ef27db943 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.001059223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dee51ecf-94e3-4798-ba9a-b94df66dbdc3 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.001135504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dee51ecf-94e3-4798-ba9a-b94df66dbdc3 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.002931018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3681c20-0438-4fb1-bbad-c6100aa4d0f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.003883431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931318003852651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3681c20-0438-4fb1-bbad-c6100aa4d0f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.005247913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59abe3e7-ef9a-4715-9749-6ce3b75b5c68 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.005481722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59abe3e7-ef9a-4715-9749-6ce3b75b5c68 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.005699007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931153825719100,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=59abe3e7-ef9a-4715-9749-6ce3b75b5c68 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.049004396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2e83162-19df-4052-b0ba-dba1c012b470 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.049352660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2e83162-19df-4052-b0ba-dba1c012b470 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.050958125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7d21163-c4c1-46e6-be58-0c4a8707a947 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.051844915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931318051815277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7d21163-c4c1-46e6-be58-0c4a8707a947 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.053089447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3c58af0-1bd4-45a4-822e-47a3013f9cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.053171377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3c58af0-1bd4-45a4-822e-47a3013f9cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.053392254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931153825719100,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=d3c58af0-1bd4-45a4-822e-47a3013f9cde name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.095196955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0884664-c8f1-4bd7-8efd-86d4d2e62b85 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.095641084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0884664-c8f1-4bd7-8efd-86d4d2e62b85 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.097287273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abc585e5-bb2c-485a-b03b-2172c1b7f121 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.097974347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931318097786203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abc585e5-bb2c-485a-b03b-2172c1b7f121 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.098777095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33ac869a-ea96-4ebd-976c-fe16b587c9a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.098851181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33ac869a-ea96-4ebd-976c-fe16b587c9a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:01:58 embed-certs-994238 crio[887]: time="2025-09-27 00:01:58.099052017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931153825719100,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=33ac869a-ea96-4ebd-976c-fe16b587c9a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b5325c322b418       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   6                   c445e9cac1dcf       dashboard-metrics-scraper-6ffb444bf9-6kgrc
	c71795d8becf3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 minutes ago       Running             busybox                     1                   8c1501f78c382       busybox
	67eb663ec36d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         2                   8d8e1046ef7dd       storage-provisioner
	f2abc109f0d27       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   3ac27b2b23107       coredns-66bc5c9577-2bp42
	7c257886ddfab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         1                   8d8e1046ef7dd       storage-provisioner
	4110a56d3c6a0       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      9 minutes ago       Running             kube-proxy                  1                   6e61bfadd8735       kube-proxy-26dzh
	459f9669b0d52       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   45d86926188ea       etcd-embed-certs-994238
	c8dd25e029012       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      9 minutes ago       Running             kube-scheduler              1                   1cf2d0cd78ed6       kube-scheduler-embed-certs-994238
	2064521c7e43d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      9 minutes ago       Running             kube-controller-manager     1                   ac1329cc6d82c       kube-controller-manager-embed-certs-994238
	eeb206142ca73       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      9 minutes ago       Running             kube-apiserver              1                   6e718e678d86f       kube-apiserver-embed-certs-994238
	
	
	==> coredns [f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51344 - 11755 "HINFO IN 3138978563286339268.2439510168868357739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037962699s
	
	
	==> describe nodes <==
	Name:               embed-certs-994238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-994238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=embed-certs-994238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_49_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:49:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-994238
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Sep 2025 00:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:58:36 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:58:36 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:58:36 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:58:36 +0000   Fri, 26 Sep 2025 23:52:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.66
	  Hostname:    embed-certs-994238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 62d30061cfd044b9b19ed1fea89cb5e1
	  System UUID:                62d30061-cfd0-44b9-b19e-d1fea89cb5e1
	  Boot ID:                    a7877111-27f9-48e9-939a-e7385196adda
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-2bp42                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-embed-certs-994238                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-embed-certs-994238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-994238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-26dzh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-994238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-nr4tj               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6kgrc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9wwwt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeReady                12m                    kubelet          Node embed-certs-994238 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node embed-certs-994238 event: Registered Node embed-certs-994238 in Controller
	  Normal   Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m19s                  kubelet          Node embed-certs-994238 has been rebooted, boot id: a7877111-27f9-48e9-939a-e7385196adda
	  Normal   RegisteredNode           9m15s                  node-controller  Node embed-certs-994238 event: Registered Node embed-certs-994238 in Controller
	
	
	==> dmesg <==
	[Sep26 23:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001542] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003145] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.786625] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.114348] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.126859] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.797409] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.282526] kauditd_printk_skb: 239 callbacks suppressed
	[  +3.678837] kauditd_printk_skb: 110 callbacks suppressed
	[Sep26 23:53] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.252062] kauditd_printk_skb: 11 callbacks suppressed
	[ +18.469727] kauditd_printk_skb: 49 callbacks suppressed
	[Sep26 23:54] kauditd_printk_skb: 6 callbacks suppressed
	[ +46.997540] kauditd_printk_skb: 6 callbacks suppressed
	[Sep26 23:56] kauditd_printk_skb: 6 callbacks suppressed
	[Sep26 23:59] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282] <==
	{"level":"warn","ts":"2025-09-26T23:52:38.081911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.092666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.103286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.126760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.133216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.153878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.161990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.171406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.278282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:56.782653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.772537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16636747309780032576 > lease_revoke:<id:66e19988715fda8d>","response":"size:28"}
	{"level":"info","ts":"2025-09-26T23:52:56.782835Z","caller":"traceutil/trace.go:172","msg":"trace[517451159] linearizableReadLoop","detail":"{readStateIndex:757; appliedIndex:756; }","duration":"113.173371ms","start":"2025-09-26T23:52:56.669646Z","end":"2025-09-26T23:52:56.782819Z","steps":["trace[517451159] 'read index received'  (duration: 26.204µs)","trace[517451159] 'applied index is now lower than readState.Index'  (duration: 113.146143ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:52:56.782903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.241905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:52:56.782918Z","caller":"traceutil/trace.go:172","msg":"trace[1847920199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:705; }","duration":"113.271312ms","start":"2025-09-26T23:52:56.669642Z","end":"2025-09-26T23:52:56.782913Z","steps":["trace[1847920199] 'agreement among raft nodes before linearized reading'  (duration: 113.217896ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:56.932719Z","caller":"traceutil/trace.go:172","msg":"trace[1397283063] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"114.677065ms","start":"2025-09-26T23:52:56.818027Z","end":"2025-09-26T23:52:56.932704Z","steps":["trace[1397283063] 'process raft request'  (duration: 114.540397ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:57.088245Z","caller":"traceutil/trace.go:172","msg":"trace[852881930] linearizableReadLoop","detail":"{readStateIndex:758; appliedIndex:758; }","duration":"104.51992ms","start":"2025-09-26T23:52:56.983705Z","end":"2025-09-26T23:52:57.088225Z","steps":["trace[852881930] 'read index received'  (duration: 104.513833ms)","trace[852881930] 'applied index is now lower than readState.Index'  (duration: 5.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:52:57.094106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.401026ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:52:57.094229Z","caller":"traceutil/trace.go:172","msg":"trace[1856526367] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:706; }","duration":"110.542193ms","start":"2025-09-26T23:52:56.983674Z","end":"2025-09-26T23:52:57.094216Z","steps":["trace[1856526367] 'agreement among raft nodes before linearized reading'  (duration: 104.679881ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:57.094785Z","caller":"traceutil/trace.go:172","msg":"trace[157560446] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"269.874187ms","start":"2025-09-26T23:52:56.824900Z","end":"2025-09-26T23:52:57.094775Z","steps":["trace[157560446] 'process raft request'  (duration: 263.493408ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:20.171932Z","caller":"traceutil/trace.go:172","msg":"trace[1835746828] linearizableReadLoop","detail":"{readStateIndex:781; appliedIndex:781; }","duration":"189.080712ms","start":"2025-09-26T23:53:19.982835Z","end":"2025-09-26T23:53:20.171915Z","steps":["trace[1835746828] 'read index received'  (duration: 189.076373ms)","trace[1835746828] 'applied index is now lower than readState.Index'  (duration: 3.744µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:53:20.172271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.341662ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:53:20.172646Z","caller":"traceutil/trace.go:172","msg":"trace[1485630491] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:725; }","duration":"189.740658ms","start":"2025-09-26T23:53:19.982830Z","end":"2025-09-26T23:53:20.172571Z","steps":["trace[1485630491] 'agreement among raft nodes before linearized reading'  (duration: 189.316758ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:20.172665Z","caller":"traceutil/trace.go:172","msg":"trace[1818633723] transaction","detail":"{read_only:false; response_revision:726; number_of_response:1; }","duration":"194.00584ms","start":"2025-09-26T23:53:19.978643Z","end":"2025-09-26T23:53:20.172648Z","steps":["trace[1818633723] 'process raft request'  (duration: 193.867924ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:53:31.806113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.418678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:53:31.806220Z","caller":"traceutil/trace.go:172","msg":"trace[425644245] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"141.539345ms","start":"2025-09-26T23:53:31.664664Z","end":"2025-09-26T23:53:31.806203Z","steps":["trace[425644245] 'range keys from in-memory index tree'  (duration: 141.254113ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:50.726922Z","caller":"traceutil/trace.go:172","msg":"trace[2005300218] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"102.646556ms","start":"2025-09-26T23:53:50.624257Z","end":"2025-09-26T23:53:50.726904Z","steps":["trace[2005300218] 'process raft request'  (duration: 102.529043ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:01:58 up 9 min,  0 users,  load average: 0.23, 0.31, 0.24
	Linux embed-certs-994238 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7] <==
	I0926 23:58:18.174130       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 23:58:40.214205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:58:40.214337       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0926 23:58:40.214348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0926 23:58:40.215713       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:58:40.215762       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0926 23:58:40.215785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0926 23:58:44.325006       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:59:32.907575       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:59:45.466708       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0927 00:00:40.214845       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:00:40.215028       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 00:00:40.215046       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 00:00:40.216668       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:00:40.216903       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0927 00:00:40.217290       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 00:00:45.685730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0927 00:01:04.239117       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0927 00:01:53.336512       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708] <==
	I0926 23:55:43.861189       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:56:13.764096       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:56:13.870099       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:56:43.770114       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:56:43.881241       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:57:13.775153       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:57:13.891093       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:57:43.781208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:57:43.901767       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:58:13.785681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:58:13.910936       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:58:43.792101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:58:43.918389       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:59:13.798071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:59:13.929958       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:59:43.804156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:59:43.939109       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:00:13.813284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:00:13.949013       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:00:43.821020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:00:43.963730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:01:13.826052       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:01:13.980523       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:01:43.831286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:01:43.995091       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746] <==
	I0926 23:52:40.606559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:52:40.707795       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:52:40.707840       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.66"]
	E0926 23:52:40.707947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:52:40.754693       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 23:52:40.754740       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 23:52:40.754767       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:52:40.766680       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:52:40.767204       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:52:40.767266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:52:40.773886       1 config.go:200] "Starting service config controller"
	I0926 23:52:40.773969       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:52:40.773999       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:52:40.774005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:52:40.774032       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:52:40.774064       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:52:40.775177       1 config.go:309] "Starting node config controller"
	I0926 23:52:40.775220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:52:40.775231       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:52:40.875125       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 23:52:40.875216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:52:40.875268       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4] <==
	I0926 23:52:36.618741       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:52:39.183358       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 23:52:39.183560       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 23:52:39.183594       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:52:39.183674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:52:39.238696       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 23:52:39.238745       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:52:39.242687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 23:52:39.243025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:52:39.243652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:52:39.243749       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:52:39.344350       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 27 00:01:03 embed-certs-994238 kubelet[1219]: E0927 00:01:03.962801    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931263962232424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:03 embed-certs-994238 kubelet[1219]: E0927 00:01:03.962832    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931263962232424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:07 embed-certs-994238 kubelet[1219]: E0927 00:01:07.809272    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:01:07 embed-certs-994238 kubelet[1219]: E0927 00:01:07.809302    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt" podUID="765bffdb-42c1-4742-b6f6-448a5ca12c32"
	Sep 27 00:01:13 embed-certs-994238 kubelet[1219]: E0927 00:01:13.967428    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931273964764081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:13 embed-certs-994238 kubelet[1219]: E0927 00:01:13.967515    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931273964764081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:15 embed-certs-994238 kubelet[1219]: I0927 00:01:15.807531    1219 scope.go:117] "RemoveContainer" containerID="b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d"
	Sep 27 00:01:15 embed-certs-994238 kubelet[1219]: E0927 00:01:15.807700    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:01:21 embed-certs-994238 kubelet[1219]: E0927 00:01:21.809535    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:01:23 embed-certs-994238 kubelet[1219]: E0927 00:01:23.969120    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931283968720969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:23 embed-certs-994238 kubelet[1219]: E0927 00:01:23.969145    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931283968720969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:26 embed-certs-994238 kubelet[1219]: I0927 00:01:26.806901    1219 scope.go:117] "RemoveContainer" containerID="b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d"
	Sep 27 00:01:26 embed-certs-994238 kubelet[1219]: E0927 00:01:26.807119    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:01:33 embed-certs-994238 kubelet[1219]: E0927 00:01:33.811216    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:01:33 embed-certs-994238 kubelet[1219]: E0927 00:01:33.972124    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931293971265677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:33 embed-certs-994238 kubelet[1219]: E0927 00:01:33.972201    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931293971265677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:38 embed-certs-994238 kubelet[1219]: I0927 00:01:38.806893    1219 scope.go:117] "RemoveContainer" containerID="b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d"
	Sep 27 00:01:38 embed-certs-994238 kubelet[1219]: E0927 00:01:38.807080    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:01:43 embed-certs-994238 kubelet[1219]: E0927 00:01:43.974165    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931303973657700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:43 embed-certs-994238 kubelet[1219]: E0927 00:01:43.974192    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931303973657700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:48 embed-certs-994238 kubelet[1219]: E0927 00:01:48.809607    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:01:50 embed-certs-994238 kubelet[1219]: I0927 00:01:50.806687    1219 scope.go:117] "RemoveContainer" containerID="b5325c322b418e0df6f35176b3f5aeba1f6bd1b598a5e7b317ee137b25d2452d"
	Sep 27 00:01:50 embed-certs-994238 kubelet[1219]: E0927 00:01:50.806875    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:01:53 embed-certs-994238 kubelet[1219]: E0927 00:01:53.976207    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931313975838050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:01:53 embed-certs-994238 kubelet[1219]: E0927 00:01:53.976251    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931313975838050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2] <==
	W0927 00:01:33.642027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:35.645960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:35.652154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:37.656383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:37.667753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:39.672176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:39.678904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:41.682823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:41.692915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:43.696611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:43.702867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:45.708302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:45.715182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:47.719079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:47.725157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:49.730278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:49.737845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:51.742102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:51.749342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:53.752341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:53.758495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:55.762858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:55.772862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:57.778311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:01:57.784242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819] <==
	I0926 23:52:40.535748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 23:53:10.539307       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-994238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt: exit status 1 (64.096572ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nr4tj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9wwwt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9wwwt" [765bffdb-42c1-4742-b6f6-448a5ca12c32] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0927 00:02:16.179412    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:02:34.193202    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:03:01.896425    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:03:15.937612    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:03:41.885374    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:03:43.639022    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:04:09.588643    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:04:32.318913    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:05:00.020898    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:05:26.313622    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:05:52.987272    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:05:57.510410    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:23.450589    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:28.494683    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:32.969617    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:34.673373    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:49.382506    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:06:51.007806    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:07:34.193549    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:07:51.560880    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:07:57.736716    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:08:14.086046    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:08:15.937898    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:08:41.885783    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:09:32.318990    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:09:36.045315    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:10:26.313616    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:10:52.987784    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:10:57.509717    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-27 00:10:59.620923229 +0000 UTC m=+6125.451322699
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-994238 describe po kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-994238 describe po kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-9wwwt
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-994238/192.168.72.66
Start Time:       Fri, 26 Sep 2025 23:52:44 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w464s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-w464s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt to embed-certs-994238
Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    12m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x5 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed     12m (x3 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3m9s (x46 over 17m)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m34s (x49 over 17m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard: exit status 1 (81.494804ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-9wwwt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-994238 logs kubernetes-dashboard-855c9754f9-9wwwt -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-994238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994238 -n embed-certs-994238
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-994238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-994238 logs -n 25: (1.401209386s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-421834 sudo iptables -t nat -L -n -v                                 │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status kubelet --all --full --no-pager         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl cat kubelet --no-pager                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status docker --all --full --no-pager          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat docker --no-pager                          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/docker/daemon.json                              │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo docker system info                                       │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat cri-docker --no-pager                      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cri-dockerd --version                                    │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status containerd --all --full --no-pager      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │                     │
	│ ssh     │ -p bridge-421834 sudo systemctl cat containerd --no-pager                      │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /lib/systemd/system/containerd.service               │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo cat /etc/containerd/config.toml                          │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo containerd config dump                                   │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl status crio --all --full --no-pager            │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo systemctl cat crio --no-pager                            │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ ssh     │ -p bridge-421834 sudo crio config                                              │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	│ delete  │ -p bridge-421834                                                               │ bridge-421834 │ jenkins │ v1.37.0 │ 26 Sep 25 23:54 UTC │ 26 Sep 25 23:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:53:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:53:03.230222   66389 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:53:03.230606   66389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:53:03.230625   66389 out.go:374] Setting ErrFile to fd 2...
	I0926 23:53:03.230632   66389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:53:03.231015   66389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:53:03.231745   66389 out.go:368] Setting JSON to false
	I0926 23:53:03.233328   66389 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5728,"bootTime":1758925055,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:53:03.233417   66389 start.go:140] virtualization: kvm guest
	I0926 23:53:03.235488   66389 out.go:179] * [bridge-421834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:53:03.236968   66389 notify.go:220] Checking for updates...
	I0926 23:53:03.236990   66389 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:53:03.238477   66389 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:53:03.239701   66389 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:03.241110   66389 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.242715   66389 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:53:03.244044   66389 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:53:03.246323   66389 config.go:182] Loaded profile config "embed-certs-994238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246463   66389 config.go:182] Loaded profile config "enable-default-cni-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246577   66389 config.go:182] Loaded profile config "flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:03.246697   66389 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:53:03.285672   66389 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 23:53:03.286916   66389 start.go:304] selected driver: kvm2
	I0926 23:53:03.286939   66389 start.go:924] validating driver "kvm2" against <nil>
	I0926 23:53:03.286956   66389 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:53:03.288092   66389 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:53:03.288200   66389 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:53:03.304645   66389 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:53:03.304690   66389 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 23:53:03.321296   66389 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 23:53:03.321352   66389 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:53:03.321741   66389 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:03.321798   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:03.321812   66389 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:53:03.321891   66389 start.go:348] cluster config:
	{Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:53:03.322027   66389 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:53:03.323954   66389 out.go:179] * Starting "bridge-421834" primary control-plane node in "bridge-421834" cluster
	I0926 23:53:03.325338   66389 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:53:03.325392   66389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:53:03.325406   66389 cache.go:58] Caching tarball of preloaded images
	I0926 23:53:03.325548   66389 preload.go:172] Found /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:53:03.325566   66389 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:53:03.325711   66389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json ...
	I0926 23:53:03.325746   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json: {Name:mkc3cbb36558969d3f714e3524b9d6df6545a49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:03.326026   66389 start.go:360] acquireMachinesLock for bridge-421834: {Name:mk2abc374bcfc09d0b998f1b70bb443182c23d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 23:53:03.326086   66389 start.go:364] duration metric: took 37.574µs to acquireMachinesLock for "bridge-421834"
	I0926 23:53:03.326111   66389 start.go:93] Provisioning new machine with config: &{Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:03.326176   66389 start.go:125] createHost starting for "" (driver="kvm2")
	W0926 23:53:04.650091   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	W0926 23:53:07.151106   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	I0926 23:53:03.327735   66389 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 23:53:03.327974   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:03.328034   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:03.342698   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0926 23:53:03.343259   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:03.343858   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:03.343896   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:03.344286   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:03.344651   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:03.344846   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:03.345076   66389 start.go:159] libmachine.API.Create for "bridge-421834" (driver="kvm2")
	I0926 23:53:03.345129   66389 client.go:168] LocalClient.Create starting
	I0926 23:53:03.345171   66389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem
	I0926 23:53:03.345218   66389 main.go:141] libmachine: Decoding PEM data...
	I0926 23:53:03.345238   66389 main.go:141] libmachine: Parsing certificate...
	I0926 23:53:03.345312   66389 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem
	I0926 23:53:03.345344   66389 main.go:141] libmachine: Decoding PEM data...
	I0926 23:53:03.345373   66389 main.go:141] libmachine: Parsing certificate...
	I0926 23:53:03.345402   66389 main.go:141] libmachine: Running pre-create checks...
	I0926 23:53:03.345413   66389 main.go:141] libmachine: (bridge-421834) Calling .PreCreateCheck
	I0926 23:53:03.345724   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:03.346239   66389 main.go:141] libmachine: Creating machine...
	I0926 23:53:03.346261   66389 main.go:141] libmachine: (bridge-421834) Calling .Create
	I0926 23:53:03.346383   66389 main.go:141] libmachine: (bridge-421834) creating domain...
	I0926 23:53:03.346403   66389 main.go:141] libmachine: (bridge-421834) creating network...
	I0926 23:53:03.348128   66389 main.go:141] libmachine: (bridge-421834) DBG | found existing default network
	I0926 23:53:03.348334   66389 main.go:141] libmachine: (bridge-421834) DBG | <network connections='3'>
	I0926 23:53:03.348355   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>default</name>
	I0926 23:53:03.348368   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0926 23:53:03.348383   66389 main.go:141] libmachine: (bridge-421834) DBG |   <forward mode='nat'>
	I0926 23:53:03.348395   66389 main.go:141] libmachine: (bridge-421834) DBG |     <nat>
	I0926 23:53:03.348408   66389 main.go:141] libmachine: (bridge-421834) DBG |       <port start='1024' end='65535'/>
	I0926 23:53:03.348421   66389 main.go:141] libmachine: (bridge-421834) DBG |     </nat>
	I0926 23:53:03.348436   66389 main.go:141] libmachine: (bridge-421834) DBG |   </forward>
	I0926 23:53:03.348452   66389 main.go:141] libmachine: (bridge-421834) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0926 23:53:03.348466   66389 main.go:141] libmachine: (bridge-421834) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0926 23:53:03.348476   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0926 23:53:03.348482   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.348490   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0926 23:53:03.348500   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.348507   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.348514   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.348522   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.349251   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.349100   66416 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:3f:8a} reservation:<nil>}
	I0926 23:53:03.349892   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.349767   66416 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:92:05} reservation:<nil>}
	I0926 23:53:03.350594   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.350504   66416 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292aa0}
	I0926 23:53:03.350611   66389 main.go:141] libmachine: (bridge-421834) DBG | defining private network:
	I0926 23:53:03.350674   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.350700   66389 main.go:141] libmachine: (bridge-421834) DBG | <network>
	I0926 23:53:03.350711   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>mk-bridge-421834</name>
	I0926 23:53:03.350720   66389 main.go:141] libmachine: (bridge-421834) DBG |   <dns enable='no'/>
	I0926 23:53:03.350734   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0926 23:53:03.350756   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.350770   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0926 23:53:03.350780   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.350788   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.350801   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.350811   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.356908   66389 main.go:141] libmachine: (bridge-421834) DBG | creating private network mk-bridge-421834 192.168.61.0/24...
	I0926 23:53:03.447178   66389 main.go:141] libmachine: (bridge-421834) DBG | private network mk-bridge-421834 192.168.61.0/24 created
	I0926 23:53:03.447463   66389 main.go:141] libmachine: (bridge-421834) DBG | <network>
	I0926 23:53:03.447481   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>mk-bridge-421834</name>
	I0926 23:53:03.447494   66389 main.go:141] libmachine: (bridge-421834) setting up store path in /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 ...
	I0926 23:53:03.447503   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>20995e88-23b6-4a61-b3dc-4476e2fed59a</uuid>
	I0926 23:53:03.447514   66389 main.go:141] libmachine: (bridge-421834) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0926 23:53:03.447521   66389 main.go:141] libmachine: (bridge-421834) DBG |   <mac address='52:54:00:85:bd:a6'/>
	I0926 23:53:03.447530   66389 main.go:141] libmachine: (bridge-421834) DBG |   <dns enable='no'/>
	I0926 23:53:03.447539   66389 main.go:141] libmachine: (bridge-421834) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0926 23:53:03.447565   66389 main.go:141] libmachine: (bridge-421834) building disk image from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 23:53:03.447593   66389 main.go:141] libmachine: (bridge-421834) Downloading /home/jenkins/minikube-integration/21642-6020/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0926 23:53:03.447605   66389 main.go:141] libmachine: (bridge-421834) DBG |     <dhcp>
	I0926 23:53:03.447624   66389 main.go:141] libmachine: (bridge-421834) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0926 23:53:03.447637   66389 main.go:141] libmachine: (bridge-421834) DBG |     </dhcp>
	I0926 23:53:03.447648   66389 main.go:141] libmachine: (bridge-421834) DBG |   </ip>
	I0926 23:53:03.447661   66389 main.go:141] libmachine: (bridge-421834) DBG | </network>
	I0926 23:53:03.447670   66389 main.go:141] libmachine: (bridge-421834) DBG | 
	I0926 23:53:03.447694   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.447443   66416 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.710861   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.710725   66416 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa...
	I0926 23:53:03.942057   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.941915   66416 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk...
	I0926 23:53:03.942095   66389 main.go:141] libmachine: (bridge-421834) DBG | Writing magic tar header
	I0926 23:53:03.942116   66389 main.go:141] libmachine: (bridge-421834) DBG | Writing SSH key tar header
	I0926 23:53:03.942190   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:03.942133   66416 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 ...
	I0926 23:53:03.942272   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834
	I0926 23:53:03.942295   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube/machines
	I0926 23:53:03.942313   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834 (perms=drwx------)
	I0926 23:53:03.942327   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:53:03.942337   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21642-6020
	I0926 23:53:03.942346   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube/machines (perms=drwxr-xr-x)
	I0926 23:53:03.942355   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020/.minikube (perms=drwxr-xr-x)
	I0926 23:53:03.942363   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration/21642-6020 (perms=drwxrwxr-x)
	I0926 23:53:03.942375   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0926 23:53:03.942399   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0926 23:53:03.942411   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home/jenkins
	I0926 23:53:03.942420   66389 main.go:141] libmachine: (bridge-421834) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0926 23:53:03.942432   66389 main.go:141] libmachine: (bridge-421834) DBG | checking permissions on dir: /home
	I0926 23:53:03.942446   66389 main.go:141] libmachine: (bridge-421834) DBG | skipping /home - not owner
	I0926 23:53:03.942456   66389 main.go:141] libmachine: (bridge-421834) defining domain...
	I0926 23:53:03.944054   66389 main.go:141] libmachine: (bridge-421834) defining domain using XML: 
	I0926 23:53:03.944122   66389 main.go:141] libmachine: (bridge-421834) <domain type='kvm'>
	I0926 23:53:03.944136   66389 main.go:141] libmachine: (bridge-421834)   <name>bridge-421834</name>
	I0926 23:53:03.944153   66389 main.go:141] libmachine: (bridge-421834)   <memory unit='MiB'>3072</memory>
	I0926 23:53:03.944163   66389 main.go:141] libmachine: (bridge-421834)   <vcpu>2</vcpu>
	I0926 23:53:03.944172   66389 main.go:141] libmachine: (bridge-421834)   <features>
	I0926 23:53:03.944184   66389 main.go:141] libmachine: (bridge-421834)     <acpi/>
	I0926 23:53:03.944191   66389 main.go:141] libmachine: (bridge-421834)     <apic/>
	I0926 23:53:03.944202   66389 main.go:141] libmachine: (bridge-421834)     <pae/>
	I0926 23:53:03.944209   66389 main.go:141] libmachine: (bridge-421834)   </features>
	I0926 23:53:03.944221   66389 main.go:141] libmachine: (bridge-421834)   <cpu mode='host-passthrough'>
	I0926 23:53:03.944230   66389 main.go:141] libmachine: (bridge-421834)   </cpu>
	I0926 23:53:03.944273   66389 main.go:141] libmachine: (bridge-421834)   <os>
	I0926 23:53:03.944306   66389 main.go:141] libmachine: (bridge-421834)     <type>hvm</type>
	I0926 23:53:03.944324   66389 main.go:141] libmachine: (bridge-421834)     <boot dev='cdrom'/>
	I0926 23:53:03.944333   66389 main.go:141] libmachine: (bridge-421834)     <boot dev='hd'/>
	I0926 23:53:03.944343   66389 main.go:141] libmachine: (bridge-421834)     <bootmenu enable='no'/>
	I0926 23:53:03.944351   66389 main.go:141] libmachine: (bridge-421834)   </os>
	I0926 23:53:03.944360   66389 main.go:141] libmachine: (bridge-421834)   <devices>
	I0926 23:53:03.944384   66389 main.go:141] libmachine: (bridge-421834)     <disk type='file' device='cdrom'>
	I0926 23:53:03.944403   66389 main.go:141] libmachine: (bridge-421834)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/boot2docker.iso'/>
	I0926 23:53:03.944416   66389 main.go:141] libmachine: (bridge-421834)       <target dev='hdc' bus='scsi'/>
	I0926 23:53:03.944422   66389 main.go:141] libmachine: (bridge-421834)       <readonly/>
	I0926 23:53:03.944433   66389 main.go:141] libmachine: (bridge-421834)     </disk>
	I0926 23:53:03.944442   66389 main.go:141] libmachine: (bridge-421834)     <disk type='file' device='disk'>
	I0926 23:53:03.944455   66389 main.go:141] libmachine: (bridge-421834)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0926 23:53:03.944470   66389 main.go:141] libmachine: (bridge-421834)       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk'/>
	I0926 23:53:03.944478   66389 main.go:141] libmachine: (bridge-421834)       <target dev='hda' bus='virtio'/>
	I0926 23:53:03.944490   66389 main.go:141] libmachine: (bridge-421834)     </disk>
	I0926 23:53:03.944497   66389 main.go:141] libmachine: (bridge-421834)     <interface type='network'>
	I0926 23:53:03.944511   66389 main.go:141] libmachine: (bridge-421834)       <source network='mk-bridge-421834'/>
	I0926 23:53:03.944521   66389 main.go:141] libmachine: (bridge-421834)       <model type='virtio'/>
	I0926 23:53:03.944531   66389 main.go:141] libmachine: (bridge-421834)     </interface>
	I0926 23:53:03.944541   66389 main.go:141] libmachine: (bridge-421834)     <interface type='network'>
	I0926 23:53:03.944555   66389 main.go:141] libmachine: (bridge-421834)       <source network='default'/>
	I0926 23:53:03.944575   66389 main.go:141] libmachine: (bridge-421834)       <model type='virtio'/>
	I0926 23:53:03.944584   66389 main.go:141] libmachine: (bridge-421834)     </interface>
	I0926 23:53:03.944590   66389 main.go:141] libmachine: (bridge-421834)     <serial type='pty'>
	I0926 23:53:03.944598   66389 main.go:141] libmachine: (bridge-421834)       <target port='0'/>
	I0926 23:53:03.944604   66389 main.go:141] libmachine: (bridge-421834)     </serial>
	I0926 23:53:03.944621   66389 main.go:141] libmachine: (bridge-421834)     <console type='pty'>
	I0926 23:53:03.944627   66389 main.go:141] libmachine: (bridge-421834)       <target type='serial' port='0'/>
	I0926 23:53:03.944645   66389 main.go:141] libmachine: (bridge-421834)     </console>
	I0926 23:53:03.944652   66389 main.go:141] libmachine: (bridge-421834)     <rng model='virtio'>
	I0926 23:53:03.944677   66389 main.go:141] libmachine: (bridge-421834)       <backend model='random'>/dev/random</backend>
	I0926 23:53:03.944690   66389 main.go:141] libmachine: (bridge-421834)     </rng>
	I0926 23:53:03.944698   66389 main.go:141] libmachine: (bridge-421834)   </devices>
	I0926 23:53:03.944704   66389 main.go:141] libmachine: (bridge-421834) </domain>
	I0926 23:53:03.944712   66389 main.go:141] libmachine: (bridge-421834) 
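
For context, the XML dumped above is what the kvm2 driver hands to libvirt to define the VM. The driver talks to libvirt directly, but the same define-and-start sequence can be reproduced with virsh; a minimal Go sketch, assuming the XML has been saved to a hypothetical /tmp/bridge-421834.xml and that virsh can reach qemu:///system:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart defines a libvirt domain from an XML file and starts it,
// mirroring the "defining domain" / "starting domain" steps in the log above.
// xmlPath is a hypothetical path; virsh must be able to reach qemu:///system.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("define failed: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/bridge-421834.xml", "bridge-421834"); err != nil {
		fmt.Println(err)
	}
}
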
	I0926 23:53:03.950476   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:42:09:77 in network default
	I0926 23:53:03.951187   66389 main.go:141] libmachine: (bridge-421834) starting domain...
	I0926 23:53:03.951211   66389 main.go:141] libmachine: (bridge-421834) ensuring networks are active...
	I0926 23:53:03.951222   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:03.952135   66389 main.go:141] libmachine: (bridge-421834) Ensuring network default is active
	I0926 23:53:03.952553   66389 main.go:141] libmachine: (bridge-421834) Ensuring network mk-bridge-421834 is active
	I0926 23:53:03.953289   66389 main.go:141] libmachine: (bridge-421834) getting domain XML...
	I0926 23:53:03.954482   66389 main.go:141] libmachine: (bridge-421834) DBG | starting domain XML:
	I0926 23:53:03.954505   66389 main.go:141] libmachine: (bridge-421834) DBG | <domain type='kvm'>
	I0926 23:53:03.954516   66389 main.go:141] libmachine: (bridge-421834) DBG |   <name>bridge-421834</name>
	I0926 23:53:03.954524   66389 main.go:141] libmachine: (bridge-421834) DBG |   <uuid>8d3b42b7-f84e-4eb2-ada2-e26070399929</uuid>
	I0926 23:53:03.954541   66389 main.go:141] libmachine: (bridge-421834) DBG |   <memory unit='KiB'>3145728</memory>
	I0926 23:53:03.954550   66389 main.go:141] libmachine: (bridge-421834) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0926 23:53:03.954562   66389 main.go:141] libmachine: (bridge-421834) DBG |   <vcpu placement='static'>2</vcpu>
	I0926 23:53:03.954586   66389 main.go:141] libmachine: (bridge-421834) DBG |   <os>
	I0926 23:53:03.954619   66389 main.go:141] libmachine: (bridge-421834) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0926 23:53:03.954670   66389 main.go:141] libmachine: (bridge-421834) DBG |     <boot dev='cdrom'/>
	I0926 23:53:03.954686   66389 main.go:141] libmachine: (bridge-421834) DBG |     <boot dev='hd'/>
	I0926 23:53:03.954701   66389 main.go:141] libmachine: (bridge-421834) DBG |     <bootmenu enable='no'/>
	I0926 23:53:03.954713   66389 main.go:141] libmachine: (bridge-421834) DBG |   </os>
	I0926 23:53:03.954723   66389 main.go:141] libmachine: (bridge-421834) DBG |   <features>
	I0926 23:53:03.954731   66389 main.go:141] libmachine: (bridge-421834) DBG |     <acpi/>
	I0926 23:53:03.954740   66389 main.go:141] libmachine: (bridge-421834) DBG |     <apic/>
	I0926 23:53:03.954757   66389 main.go:141] libmachine: (bridge-421834) DBG |     <pae/>
	I0926 23:53:03.954773   66389 main.go:141] libmachine: (bridge-421834) DBG |   </features>
	I0926 23:53:03.954783   66389 main.go:141] libmachine: (bridge-421834) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0926 23:53:03.954794   66389 main.go:141] libmachine: (bridge-421834) DBG |   <clock offset='utc'/>
	I0926 23:53:03.954809   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_poweroff>destroy</on_poweroff>
	I0926 23:53:03.954819   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_reboot>restart</on_reboot>
	I0926 23:53:03.954840   66389 main.go:141] libmachine: (bridge-421834) DBG |   <on_crash>destroy</on_crash>
	I0926 23:53:03.954848   66389 main.go:141] libmachine: (bridge-421834) DBG |   <devices>
	I0926 23:53:03.954861   66389 main.go:141] libmachine: (bridge-421834) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0926 23:53:03.954874   66389 main.go:141] libmachine: (bridge-421834) DBG |     <disk type='file' device='cdrom'>
	I0926 23:53:03.954900   66389 main.go:141] libmachine: (bridge-421834) DBG |       <driver name='qemu' type='raw'/>
	I0926 23:53:03.954942   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/boot2docker.iso'/>
	I0926 23:53:03.954955   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target dev='hdc' bus='scsi'/>
	I0926 23:53:03.954962   66389 main.go:141] libmachine: (bridge-421834) DBG |       <readonly/>
	I0926 23:53:03.954976   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0926 23:53:03.954986   66389 main.go:141] libmachine: (bridge-421834) DBG |     </disk>
	I0926 23:53:03.954995   66389 main.go:141] libmachine: (bridge-421834) DBG |     <disk type='file' device='disk'>
	I0926 23:53:03.955007   66389 main.go:141] libmachine: (bridge-421834) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0926 23:53:03.955027   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source file='/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/bridge-421834.rawdisk'/>
	I0926 23:53:03.955043   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target dev='hda' bus='virtio'/>
	I0926 23:53:03.955064   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0926 23:53:03.955092   66389 main.go:141] libmachine: (bridge-421834) DBG |     </disk>
	I0926 23:53:03.955107   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0926 23:53:03.955120   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0926 23:53:03.955132   66389 main.go:141] libmachine: (bridge-421834) DBG |     </controller>
	I0926 23:53:03.955144   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0926 23:53:03.955156   66389 main.go:141] libmachine: (bridge-421834) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0926 23:53:03.955165   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0926 23:53:03.955176   66389 main.go:141] libmachine: (bridge-421834) DBG |     </controller>
	I0926 23:53:03.955183   66389 main.go:141] libmachine: (bridge-421834) DBG |     <interface type='network'>
	I0926 23:53:03.955196   66389 main.go:141] libmachine: (bridge-421834) DBG |       <mac address='52:54:00:35:cf:e4'/>
	I0926 23:53:03.955211   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source network='mk-bridge-421834'/>
	I0926 23:53:03.955222   66389 main.go:141] libmachine: (bridge-421834) DBG |       <model type='virtio'/>
	I0926 23:53:03.955234   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0926 23:53:03.955245   66389 main.go:141] libmachine: (bridge-421834) DBG |     </interface>
	I0926 23:53:03.955252   66389 main.go:141] libmachine: (bridge-421834) DBG |     <interface type='network'>
	I0926 23:53:03.955261   66389 main.go:141] libmachine: (bridge-421834) DBG |       <mac address='52:54:00:42:09:77'/>
	I0926 23:53:03.955269   66389 main.go:141] libmachine: (bridge-421834) DBG |       <source network='default'/>
	I0926 23:53:03.955281   66389 main.go:141] libmachine: (bridge-421834) DBG |       <model type='virtio'/>
	I0926 23:53:03.955294   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0926 23:53:03.955317   66389 main.go:141] libmachine: (bridge-421834) DBG |     </interface>
	I0926 23:53:03.955327   66389 main.go:141] libmachine: (bridge-421834) DBG |     <serial type='pty'>
	I0926 23:53:03.955337   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target type='isa-serial' port='0'>
	I0926 23:53:03.955363   66389 main.go:141] libmachine: (bridge-421834) DBG |         <model name='isa-serial'/>
	I0926 23:53:03.955379   66389 main.go:141] libmachine: (bridge-421834) DBG |       </target>
	I0926 23:53:03.955387   66389 main.go:141] libmachine: (bridge-421834) DBG |     </serial>
	I0926 23:53:03.955402   66389 main.go:141] libmachine: (bridge-421834) DBG |     <console type='pty'>
	I0926 23:53:03.955412   66389 main.go:141] libmachine: (bridge-421834) DBG |       <target type='serial' port='0'/>
	I0926 23:53:03.955421   66389 main.go:141] libmachine: (bridge-421834) DBG |     </console>
	I0926 23:53:03.955430   66389 main.go:141] libmachine: (bridge-421834) DBG |     <input type='mouse' bus='ps2'/>
	I0926 23:53:03.955440   66389 main.go:141] libmachine: (bridge-421834) DBG |     <input type='keyboard' bus='ps2'/>
	I0926 23:53:03.955448   66389 main.go:141] libmachine: (bridge-421834) DBG |     <audio id='1' type='none'/>
	I0926 23:53:03.955459   66389 main.go:141] libmachine: (bridge-421834) DBG |     <memballoon model='virtio'>
	I0926 23:53:03.955482   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0926 23:53:03.955498   66389 main.go:141] libmachine: (bridge-421834) DBG |     </memballoon>
	I0926 23:53:03.955514   66389 main.go:141] libmachine: (bridge-421834) DBG |     <rng model='virtio'>
	I0926 23:53:03.955525   66389 main.go:141] libmachine: (bridge-421834) DBG |       <backend model='random'>/dev/random</backend>
	I0926 23:53:03.955539   66389 main.go:141] libmachine: (bridge-421834) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0926 23:53:03.955561   66389 main.go:141] libmachine: (bridge-421834) DBG |     </rng>
	I0926 23:53:03.955581   66389 main.go:141] libmachine: (bridge-421834) DBG |   </devices>
	I0926 23:53:03.955599   66389 main.go:141] libmachine: (bridge-421834) DBG | </domain>
	I0926 23:53:03.955609   66389 main.go:141] libmachine: (bridge-421834) DBG | 
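
Note that in the XML read back from libvirt above, the requested 3072 MiB appears as 3145728 KiB (3072 x 1024), because libvirt normalizes memory to KiB, and generated details such as the UUID, MAC addresses, and PCI addresses have been filled in for each device.
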
	I0926 23:53:05.413564   66389 main.go:141] libmachine: (bridge-421834) waiting for domain to start...
	I0926 23:53:05.415112   66389 main.go:141] libmachine: (bridge-421834) domain is now running
	I0926 23:53:05.415138   66389 main.go:141] libmachine: (bridge-421834) waiting for IP...
	I0926 23:53:05.416102   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.416795   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.416811   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.417206   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.417274   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.417222   66416 retry.go:31] will retry after 242.746698ms: waiting for domain to come up
	I0926 23:53:05.661796   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.662661   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.662693   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.663056   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.663085   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.663033   66416 retry.go:31] will retry after 310.046377ms: waiting for domain to come up
	I0926 23:53:05.974985   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:05.975952   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:05.975977   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:05.976452   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:05.976480   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:05.976421   66416 retry.go:31] will retry after 380.53988ms: waiting for domain to come up
	I0926 23:53:06.359242   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:06.359992   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:06.360015   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:06.360452   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:06.360483   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:06.360439   66416 retry.go:31] will retry after 379.942424ms: waiting for domain to come up
	I0926 23:53:06.742493   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:06.743323   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:06.743354   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:06.743877   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:06.743937   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:06.743882   66416 retry.go:31] will retry after 473.943109ms: waiting for domain to come up
	I0926 23:53:07.219641   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:07.220455   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:07.220483   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:07.220879   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:07.220907   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:07.220806   66416 retry.go:31] will retry after 830.680185ms: waiting for domain to come up
	I0926 23:53:08.053128   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:08.053889   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:08.053917   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:08.054379   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:08.054406   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:08.054318   66416 retry.go:31] will retry after 1.082514621s: waiting for domain to come up
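
The repeated "waiting for domain to come up" messages above come from a retry loop that polls for the domain's DHCP lease (falling back to ARP) with a growing, jittered delay. A simplified sketch of that pattern, using virsh net-dhcp-leases rather than the libvirt API the driver actually uses, with the network name and MAC address taken from the log and a plain doubling backoff standing in for minikube's retry.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases` until the domain's MAC address
// shows up with an IP, roughly what the retry loop in the log above is doing.
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					fields := strings.Fields(line)
					if len(fields) >= 5 {
						// IP address column looks like 192.168.61.22/24; strip the prefix length.
						return strings.Split(fields[4], "/")[0], nil
					}
				}
			}
		}
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("no lease for %s in %s within %v", mac, network, timeout)
}

func main() {
	ip, err := waitForLease("mk-bridge-421834", "52:54:00:35:cf:e4", 2*time.Minute)
	fmt.Println(ip, err)
}
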
	I0926 23:53:09.910591   64230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:53:09.910672   64230 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:53:09.910770   64230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:53:09.910917   64230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:53:09.911047   64230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:53:09.911161   64230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:53:10.016888   64230 out.go:252]   - Generating certificates and keys ...
	I0926 23:53:10.017026   64230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:53:10.017121   64230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:53:10.017222   64230 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:53:10.017304   64230 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:53:10.017390   64230 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:53:10.017461   64230 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:53:10.017545   64230 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:53:10.017756   64230 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-421834 localhost] and IPs [192.168.50.130 127.0.0.1 ::1]
	I0926 23:53:10.017878   64230 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:53:10.018056   64230 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-421834 localhost] and IPs [192.168.50.130 127.0.0.1 ::1]
	I0926 23:53:10.018164   64230 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:53:10.018257   64230 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:53:10.018321   64230 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:53:10.018413   64230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:53:10.018491   64230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:53:10.018576   64230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:53:10.018685   64230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:53:10.018787   64230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:53:10.018889   64230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:53:10.019010   64230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:53:10.019109   64230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:53:10.082116   64230 out.go:252]   - Booting up control plane ...
	I0926 23:53:10.082252   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:53:10.082387   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:53:10.082533   64230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:53:10.082754   64230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:53:10.082923   64230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:53:10.083091   64230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:53:10.083214   64230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:53:10.083284   64230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:53:10.083469   64230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:53:10.083630   64230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:53:10.083723   64230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001292707s
	I0926 23:53:10.083875   64230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:53:10.083995   64230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.130:8443/livez
	I0926 23:53:10.084115   64230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:53:10.084210   64230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:53:10.084320   64230 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.240324477s
	I0926 23:53:10.084423   64230 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.837365507s
	I0926 23:53:10.084487   64230 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.004320011s
	I0926 23:53:10.084614   64230 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:53:10.084807   64230 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:53:10.084919   64230 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:53:10.085160   64230 kubeadm.go:318] [mark-control-plane] Marking the node flannel-421834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:53:10.085234   64230 kubeadm.go:318] [bootstrap-token] Using token: 4os0st.sh1dwg769x37x84s
	I0926 23:53:10.116755   64230 out.go:252]   - Configuring RBAC rules ...
	I0926 23:53:10.116963   64230 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:53:10.117080   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:53:10.117276   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:53:10.117478   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:53:10.117640   64230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:53:10.117762   64230 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:53:10.117956   64230 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:53:10.118030   64230 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:53:10.118106   64230 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:53:10.118116   64230 kubeadm.go:318] 
	I0926 23:53:10.118205   64230 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:53:10.118214   64230 kubeadm.go:318] 
	I0926 23:53:10.118345   64230 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:53:10.118360   64230 kubeadm.go:318] 
	I0926 23:53:10.118396   64230 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:53:10.118489   64230 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:53:10.118561   64230 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:53:10.118570   64230 kubeadm.go:318] 
	I0926 23:53:10.118658   64230 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:53:10.118666   64230 kubeadm.go:318] 
	I0926 23:53:10.118732   64230 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:53:10.118741   64230 kubeadm.go:318] 
	I0926 23:53:10.118844   64230 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:53:10.118952   64230 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:53:10.119041   64230 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:53:10.119048   64230 kubeadm.go:318] 
	I0926 23:53:10.119242   64230 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:53:10.119372   64230 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:53:10.119389   64230 kubeadm.go:318] 
	I0926 23:53:10.119523   64230 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 4os0st.sh1dwg769x37x84s \
	I0926 23:53:10.119693   64230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 23:53:10.119739   64230 kubeadm.go:318] 	--control-plane 
	I0926 23:53:10.119762   64230 kubeadm.go:318] 
	I0926 23:53:10.119919   64230 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:53:10.119936   64230 kubeadm.go:318] 
	I0926 23:53:10.120053   64230 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 4os0st.sh1dwg769x37x84s \
	I0926 23:53:10.120167   64230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
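
The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA. It can be recomputed on the control-plane node with a short Go program; the CA path below is the standard kubeadm location:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute the kubeadm discovery-token-ca-cert-hash from the cluster CA:
// SHA-256 over the DER-encoded SubjectPublicKeyInfo of /etc/kubernetes/pki/ca.crt.
func main() {
	raw, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
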
	I0926 23:53:10.120181   64230 cni.go:84] Creating CNI manager for "flannel"
	I0926 23:53:10.178904   64230 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	W0926 23:53:09.152106   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	W0926 23:53:11.649252   62447 pod_ready.go:104] pod "coredns-66bc5c9577-b2hgd" is not "Ready", error: <nil>
	I0926 23:53:09.138210   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:09.138974   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:09.139002   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:09.139401   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:09.139457   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:09.139402   66416 retry.go:31] will retry after 1.24975676s: waiting for domain to come up
	I0926 23:53:10.391406   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:10.392295   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:10.392325   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:10.392771   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:10.392859   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:10.392767   66416 retry.go:31] will retry after 1.39046487s: waiting for domain to come up
	I0926 23:53:11.785124   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:11.785782   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:11.785805   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:11.786195   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:11.786220   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:11.786165   66416 retry.go:31] will retry after 1.841603756s: waiting for domain to come up
	I0926 23:53:10.225761   64230 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 23:53:10.238860   64230 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 23:53:10.238885   64230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I0926 23:53:10.274150   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 23:53:10.803561   64230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:53:10.803617   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:10.803735   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-421834 minikube.k8s.io/updated_at=2025_09_26T23_53_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=flannel-421834 minikube.k8s.io/primary=true
	I0926 23:53:10.843210   64230 ops.go:34] apiserver oom_adj: -16
	I0926 23:53:10.944365   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:11.445111   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:11.944619   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:12.444654   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:12.945159   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.445131   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.944746   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:14.444939   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:13.650389   62447 pod_ready.go:94] pod "coredns-66bc5c9577-b2hgd" is "Ready"
	I0926 23:53:13.650411   62447 pod_ready.go:86] duration metric: took 36.508072379s for pod "coredns-66bc5c9577-b2hgd" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.650421   62447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.653233   62447 pod_ready.go:99] pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-rwz5t" not found
	I0926 23:53:13.653253   62447 pod_ready.go:86] duration metric: took 2.826491ms for pod "coredns-66bc5c9577-rwz5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.657377   62447 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.664402   62447 pod_ready.go:94] pod "etcd-enable-default-cni-421834" is "Ready"
	I0926 23:53:13.664424   62447 pod_ready.go:86] duration metric: took 7.018923ms for pod "etcd-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.667333   62447 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.675804   62447 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-421834" is "Ready"
	I0926 23:53:13.675864   62447 pod_ready.go:86] duration metric: took 8.503966ms for pod "kube-apiserver-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:13.678769   62447 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.047842   62447 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-421834" is "Ready"
	I0926 23:53:14.047875   62447 pod_ready.go:86] duration metric: took 369.075451ms for pod "kube-controller-manager-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.247491   62447 pod_ready.go:83] waiting for pod "kube-proxy-qkshr" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.646615   62447 pod_ready.go:94] pod "kube-proxy-qkshr" is "Ready"
	I0926 23:53:14.646653   62447 pod_ready.go:86] duration metric: took 399.110961ms for pod "kube-proxy-qkshr" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:14.847851   62447 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:15.248070   62447 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-421834" is "Ready"
	I0926 23:53:15.248112   62447 pod_ready.go:86] duration metric: took 400.223954ms for pod "kube-scheduler-enable-default-cni-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:15.248128   62447 pod_ready.go:40] duration metric: took 38.116640819s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:15.309709   62447 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:53:15.312939   62447 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-421834" cluster and "default" namespace by default
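
The pod_ready loop above (process 62447) polls the kube-system pods carrying the listed labels until each is Ready or gone. An equivalent one-off check against the finished cluster can be expressed with kubectl wait; a small Go wrapper as a sketch, with the context name and label selectors taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// waitReady blocks until pods matching the selector report the Ready condition,
// using `kubectl wait` under the hood.
func waitReady(context, namespace, selector, timeout string) error {
	cmd := exec.Command("kubectl", "--context", context, "-n", namespace,
		"wait", "--for=condition=Ready", "pod", "-l", selector, "--timeout="+timeout)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		if err := waitReady("enable-default-cni-421834", "kube-system", sel, "5m"); err != nil {
			fmt.Println("not ready:", sel, err)
		}
	}
}
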
	I0926 23:53:14.944969   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:15.445270   64230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:15.713410   64230 kubeadm.go:1113] duration metric: took 4.909853439s to wait for elevateKubeSystemPrivileges
	I0926 23:53:15.713471   64230 kubeadm.go:402] duration metric: took 20.090602703s to StartCluster
	I0926 23:53:15.713496   64230 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:15.713612   64230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:15.716100   64230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:15.716437   64230 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.130 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:15.716586   64230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:53:15.716880   64230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:53:15.716975   64230 addons.go:69] Setting storage-provisioner=true in profile "flannel-421834"
	I0926 23:53:15.717000   64230 addons.go:238] Setting addon storage-provisioner=true in "flannel-421834"
	I0926 23:53:15.717033   64230 host.go:66] Checking if "flannel-421834" exists ...
	I0926 23:53:15.717209   64230 addons.go:69] Setting default-storageclass=true in profile "flannel-421834"
	I0926 23:53:15.717233   64230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-421834"
	I0926 23:53:15.717547   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.717593   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.717625   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.717668   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.717748   64230 config.go:182] Loaded profile config "flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:15.720060   64230 out.go:179] * Verifying Kubernetes components...
	I0926 23:53:15.722845   64230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:15.737104   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0926 23:53:15.737620   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.738112   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.738134   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.738208   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0926 23:53:15.738753   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.738875   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.739288   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.739292   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.739358   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.739754   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.740571   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.740607   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.744688   64230 addons.go:238] Setting addon default-storageclass=true in "flannel-421834"
	I0926 23:53:15.744734   64230 host.go:66] Checking if "flannel-421834" exists ...
	I0926 23:53:15.745116   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.745169   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.761606   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43365
	I0926 23:53:15.763301   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.765135   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I0926 23:53:15.765224   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.765414   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.767119   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.767275   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.767946   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.768025   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.768424   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.769199   64230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:15.769273   64230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:15.769897   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.774783   64230 main.go:141] libmachine: (flannel-421834) Calling .DriverName
	I0926 23:53:15.776927   64230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:53:13.630039   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:13.630694   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:13.630730   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:13.631138   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:13.631162   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:13.631106   66416 retry.go:31] will retry after 2.294192316s: waiting for domain to come up
	I0926 23:53:15.929303   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:15.930494   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:15.930792   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:15.931369   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:15.931586   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:15.931507   66416 retry.go:31] will retry after 3.412894975s: waiting for domain to come up
	I0926 23:53:15.779940   64230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:15.779963   64230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:53:15.779989   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHHostname
	I0926 23:53:15.785944   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.786725   64230 main.go:141] libmachine: (flannel-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:65:4f", ip: ""} in network mk-flannel-421834: {Iface:virbr2 ExpiryTime:2025-09-27 00:52:43 +0000 UTC Type:0 Mac:52:54:00:bc:65:4f Iaid: IPaddr:192.168.50.130 Prefix:24 Hostname:flannel-421834 Clientid:01:52:54:00:bc:65:4f}
	I0926 23:53:15.787141   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined IP address 192.168.50.130 and MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.787868   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHPort
	I0926 23:53:15.788630   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHKeyPath
	I0926 23:53:15.789031   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHUsername
	I0926 23:53:15.789388   64230 sshutil.go:53] new ssh client: &{IP:192.168.50.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/flannel-421834/id_rsa Username:docker}
	I0926 23:53:15.792087   64230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0926 23:53:15.792955   64230 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:15.793812   64230 main.go:141] libmachine: Using API Version  1
	I0926 23:53:15.793870   64230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:15.794398   64230 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:15.794684   64230 main.go:141] libmachine: (flannel-421834) Calling .GetState
	I0926 23:53:15.798132   64230 main.go:141] libmachine: (flannel-421834) Calling .DriverName
	I0926 23:53:15.798474   64230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:15.798509   64230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:53:15.798541   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHHostname
	I0926 23:53:15.804045   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.804641   64230 main.go:141] libmachine: (flannel-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:65:4f", ip: ""} in network mk-flannel-421834: {Iface:virbr2 ExpiryTime:2025-09-27 00:52:43 +0000 UTC Type:0 Mac:52:54:00:bc:65:4f Iaid: IPaddr:192.168.50.130 Prefix:24 Hostname:flannel-421834 Clientid:01:52:54:00:bc:65:4f}
	I0926 23:53:15.804720   64230 main.go:141] libmachine: (flannel-421834) DBG | domain flannel-421834 has defined IP address 192.168.50.130 and MAC address 52:54:00:bc:65:4f in network mk-flannel-421834
	I0926 23:53:15.804912   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHPort
	I0926 23:53:15.805176   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHKeyPath
	I0926 23:53:15.805405   64230 main.go:141] libmachine: (flannel-421834) Calling .GetSSHUsername
	I0926 23:53:15.805620   64230 sshutil.go:53] new ssh client: &{IP:192.168.50.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/flannel-421834/id_rsa Username:docker}
	I0926 23:53:16.209327   64230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:53:16.209525   64230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:16.355311   64230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:16.482372   64230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:17.533563   64230 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.323987139s)
	I0926 23:53:17.533595   64230 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
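
The sed pipeline above injects a hosts block and a log directive into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.50.1). Reconstructed from the sed expressions rather than copied from the cluster, the affected part of the Corefile ends up looking roughly like:

        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
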
	I0926 23:53:17.535328   64230 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.325641904s)
	I0926 23:53:17.535910   64230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.180559917s)
	I0926 23:53:17.535948   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.535959   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.536330   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:17.536368   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.536375   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:17.536384   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.536390   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.536981   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.537258   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:17.537205   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:17.540773   64230 node_ready.go:35] waiting up to 15m0s for node "flannel-421834" to be "Ready" ...
	I0926 23:53:17.596474   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:17.596502   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:17.596784   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:17.596804   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.046233   64230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-421834" context rescaled to 1 replicas
	I0926 23:53:18.306402   64230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.823983228s)
	I0926 23:53:18.306465   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:18.306476   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:18.307068   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:18.307118   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:18.307126   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.307134   64230 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:18.307143   64230 main.go:141] libmachine: (flannel-421834) Calling .Close
	I0926 23:53:18.307561   64230 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:18.307578   64230 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:18.307594   64230 main.go:141] libmachine: (flannel-421834) DBG | Closing plugin on server side
	I0926 23:53:18.311866   64230 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0926 23:53:18.313253   64230 addons.go:514] duration metric: took 2.596384019s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0926 23:53:19.548935   64230 node_ready.go:57] node "flannel-421834" has "Ready":"False" status (will retry)
	I0926 23:53:19.346311   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:19.347330   66389 main.go:141] libmachine: (bridge-421834) DBG | no network interface addresses found for domain bridge-421834 (source=lease)
	I0926 23:53:19.347440   66389 main.go:141] libmachine: (bridge-421834) DBG | trying to list again with source=arp
	I0926 23:53:19.348173   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find current IP address of domain bridge-421834 in network mk-bridge-421834 (interfaces detected: [])
	I0926 23:53:19.348417   66389 main.go:141] libmachine: (bridge-421834) DBG | I0926 23:53:19.348233   66416 retry.go:31] will retry after 3.007983737s: waiting for domain to come up
	I0926 23:53:22.360710   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.361659   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has current primary IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.361687   66389 main.go:141] libmachine: (bridge-421834) found domain IP: 192.168.61.22
	I0926 23:53:22.361701   66389 main.go:141] libmachine: (bridge-421834) reserving static IP address...
	I0926 23:53:22.362248   66389 main.go:141] libmachine: (bridge-421834) DBG | unable to find host DHCP lease matching {name: "bridge-421834", mac: "52:54:00:35:cf:e4", ip: "192.168.61.22"} in network mk-bridge-421834
	I0926 23:53:22.590911   66389 main.go:141] libmachine: (bridge-421834) reserved static IP address 192.168.61.22 for domain bridge-421834
	I0926 23:53:22.590941   66389 main.go:141] libmachine: (bridge-421834) DBG | Getting to WaitForSSH function...
	I0926 23:53:22.590952   66389 main.go:141] libmachine: (bridge-421834) waiting for SSH...
	I0926 23:53:22.594463   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.594998   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.595025   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.595207   66389 main.go:141] libmachine: (bridge-421834) DBG | Using SSH client type: external
	I0926 23:53:22.595229   66389 main.go:141] libmachine: (bridge-421834) DBG | Using SSH private key: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa (-rw-------)
	I0926 23:53:22.595272   66389 main.go:141] libmachine: (bridge-421834) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0926 23:53:22.595285   66389 main.go:141] libmachine: (bridge-421834) DBG | About to run SSH command:
	I0926 23:53:22.595300   66389 main.go:141] libmachine: (bridge-421834) DBG | exit 0
	I0926 23:53:22.728475   66389 main.go:141] libmachine: (bridge-421834) DBG | SSH cmd err, output: <nil>: 
	I0926 23:53:22.728881   66389 main.go:141] libmachine: (bridge-421834) domain creation complete
	I0926 23:53:22.729358   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:22.730073   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:22.730313   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:22.730511   66389 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0926 23:53:22.730541   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:22.731978   66389 main.go:141] libmachine: Detecting operating system of created instance...
	I0926 23:53:22.731994   66389 main.go:141] libmachine: Waiting for SSH to be available...
	I0926 23:53:22.732002   66389 main.go:141] libmachine: Getting to WaitForSSH function...
	I0926 23:53:22.732008   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.735233   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.735763   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.735796   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.736064   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.736260   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.736421   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.736554   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.736728   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.737005   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.737022   66389 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0926 23:53:22.846038   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:53:22.846083   66389 main.go:141] libmachine: Detecting the provisioner...
	I0926 23:53:22.846095   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.850385   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.850884   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.850919   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.851234   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.851469   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.851697   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.851893   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.852114   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.852417   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.852432   66389 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0926 23:53:22.966516   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0926 23:53:22.966639   66389 main.go:141] libmachine: found compatible host: buildroot
	I0926 23:53:22.966657   66389 main.go:141] libmachine: Provisioning with buildroot...
	I0926 23:53:22.966668   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:22.966970   66389 buildroot.go:166] provisioning hostname "bridge-421834"
	I0926 23:53:22.966999   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:22.967205   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:22.970717   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.971216   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:22.971246   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:22.971437   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:22.971670   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.971893   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:22.972083   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:22.972266   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:22.972582   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:22.972602   66389 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-421834 && echo "bridge-421834" | sudo tee /etc/hostname
	I0926 23:53:23.104989   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-421834
	
	I0926 23:53:23.105021   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.108787   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.109198   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.109230   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.109436   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.109665   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.109883   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.110062   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.110281   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:23.110587   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:23.110609   66389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-421834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-421834/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-421834' | sudo tee -a /etc/hosts; 
				fi
			fi
	W0926 23:53:22.045629   64230 node_ready.go:57] node "flannel-421834" has "Ready":"False" status (will retry)
	I0926 23:53:22.545184   64230 node_ready.go:49] node "flannel-421834" is "Ready"
	I0926 23:53:22.545213   64230 node_ready.go:38] duration metric: took 5.004290153s for node "flannel-421834" to be "Ready" ...
	I0926 23:53:22.545227   64230 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:53:22.545288   64230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:53:22.573269   64230 api_server.go:72] duration metric: took 6.856787423s to wait for apiserver process to appear ...
	I0926 23:53:22.573298   64230 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:53:22.573313   64230 api_server.go:253] Checking apiserver healthz at https://192.168.50.130:8443/healthz ...
	I0926 23:53:22.578813   64230 api_server.go:279] https://192.168.50.130:8443/healthz returned 200:
	ok
	I0926 23:53:22.580600   64230 api_server.go:141] control plane version: v1.34.0
	I0926 23:53:22.580639   64230 api_server.go:131] duration metric: took 7.325266ms to wait for apiserver health ...
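The healthz wait recorded above amounts to polling the apiserver's /healthz endpoint until it answers 200. The Go below is a minimal sketch of that loop, not minikube's actual api_server.go; the URL is taken from the log, while the timeout and poll interval are assumptions for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Certificate verification is skipped because the
// bootstrap apiserver presents a cluster-local CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Address from the log line above; the 2-minute timeout is assumed.
	if err := waitForHealthz("https://192.168.50.130:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}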
	I0926 23:53:22.580650   64230 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:53:22.585382   64230 system_pods.go:59] 7 kube-system pods found
	I0926 23:53:22.585426   64230 system_pods.go:61] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.585434   64230 system_pods.go:61] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:22.585443   64230 system_pods.go:61] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.585449   64230 system_pods.go:61] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.585455   64230 system_pods.go:61] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.585459   64230 system_pods.go:61] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.585469   64230 system_pods.go:61] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.585476   64230 system_pods.go:74] duration metric: took 4.819502ms to wait for pod list to return data ...
	I0926 23:53:22.585486   64230 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:53:22.588921   64230 default_sa.go:45] found service account: "default"
	I0926 23:53:22.588950   64230 default_sa.go:55] duration metric: took 3.45642ms for default service account to be created ...
	I0926 23:53:22.588960   64230 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:53:22.599173   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:22.599211   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.599232   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:22.599242   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.599250   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.599256   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.599266   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.599278   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.599331   64230 retry.go:31] will retry after 266.786551ms: missing components: kube-dns
	I0926 23:53:22.898319   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:22.898355   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:22.898361   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:22.898372   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:22.898377   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:22.898382   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:22.898418   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:53:22.898435   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:22.898459   64230 retry.go:31] will retry after 370.047017ms: missing components: kube-dns
	I0926 23:53:23.284233   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:23.284283   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:23.284294   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:23.284304   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:23.284321   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:23.284328   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:23.284333   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:23.284342   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:23.284366   64230 retry.go:31] will retry after 338.61988ms: missing components: kube-dns
	I0926 23:53:23.643216   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:23.643261   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:23.643270   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:23.643280   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:23.643285   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:23.643291   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:23.643295   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:23.643302   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:23.643321   64230 retry.go:31] will retry after 399.819321ms: missing components: kube-dns
	I0926 23:53:24.049673   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:24.049706   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:24.049712   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:24.049719   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:24.049722   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:24.049731   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:24.049735   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:24.049740   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:24.049754   64230 retry.go:31] will retry after 558.110871ms: missing components: kube-dns
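The repeated "will retry after ...: missing components: kube-dns" lines are a poll-and-back-off loop over the kube-system pod list. The Go below is a rough sketch of that shape, not minikube's retry.go; listRunningApps is a hypothetical stand-in for the real pod lister, and the intervals are illustrative only.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// listRunningApps is a placeholder for listing kube-system pods and
// reporting which required apps are Running; in a real program it would
// be backed by a Kubernetes client.
func listRunningApps() map[string]bool {
	return map[string]bool{"kube-apiserver": true, "kube-proxy": true}
}

// waitForComponents polls until every required component is running,
// sleeping a growing, jittered interval between attempts.
func waitForComponents(required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		running := listRunningApps()
		var missing []string
		for _, name := range required {
			if !running[name] {
				missing = append(missing, name)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		// Jitter the sleep so concurrent pollers do not synchronize.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		if wait < 2*time.Second {
			wait += wait / 2
		}
	}
	return fmt.Errorf("components still missing after %s", timeout)
}

func main() {
	_ = waitForComponents([]string{"kube-apiserver", "kube-dns"}, time.Minute)
}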
	I0926 23:53:23.235791   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:53:23.235846   66389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21642-6020/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-6020/.minikube}
	I0926 23:53:23.235913   66389 buildroot.go:174] setting up certificates
	I0926 23:53:23.235928   66389 provision.go:84] configureAuth start
	I0926 23:53:23.235947   66389 main.go:141] libmachine: (bridge-421834) Calling .GetMachineName
	I0926 23:53:23.236275   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:23.239811   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.240273   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.240310   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.240505   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.243538   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.244109   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.244141   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.244422   66389 provision.go:143] copyHostCerts
	I0926 23:53:23.244482   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem, removing ...
	I0926 23:53:23.244505   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem
	I0926 23:53:23.244595   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/ca.pem (1082 bytes)
	I0926 23:53:23.244725   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem, removing ...
	I0926 23:53:23.244735   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem
	I0926 23:53:23.244768   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/cert.pem (1123 bytes)
	I0926 23:53:23.244877   66389 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem, removing ...
	I0926 23:53:23.244889   66389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem
	I0926 23:53:23.244930   66389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-6020/.minikube/key.pem (1675 bytes)
	I0926 23:53:23.245040   66389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem org=jenkins.bridge-421834 san=[127.0.0.1 192.168.61.22 bridge-421834 localhost minikube]
	I0926 23:53:23.618556   66389 provision.go:177] copyRemoteCerts
	I0926 23:53:23.618624   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:53:23.618646   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.622767   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.623330   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.623361   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.623653   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.623913   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.624121   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.624261   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:23.723185   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0926 23:53:23.788584   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:53:23.842082   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 23:53:23.880662   66389 provision.go:87] duration metric: took 644.71758ms to configureAuth
	I0926 23:53:23.880692   66389 buildroot.go:189] setting minikube options for container-runtime
	I0926 23:53:23.880916   66389 config.go:182] Loaded profile config "bridge-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:23.880994   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:23.884495   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.885063   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:23.885098   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:23.885463   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:23.885699   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.885924   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:23.886122   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:23.886522   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:23.886722   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:23.886736   66389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:53:24.148919   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:53:24.148950   66389 main.go:141] libmachine: Checking connection to Docker...
	I0926 23:53:24.148961   66389 main.go:141] libmachine: (bridge-421834) Calling .GetURL
	I0926 23:53:24.150275   66389 main.go:141] libmachine: (bridge-421834) DBG | using libvirt version 8000000
	I0926 23:53:24.153008   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.153384   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.153432   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.153622   66389 main.go:141] libmachine: Docker is up and running!
	I0926 23:53:24.153636   66389 main.go:141] libmachine: Reticulating splines...
	I0926 23:53:24.153642   66389 client.go:171] duration metric: took 20.80850247s to LocalClient.Create
	I0926 23:53:24.153664   66389 start.go:167] duration metric: took 20.808590624s to libmachine.API.Create "bridge-421834"
	I0926 23:53:24.153671   66389 start.go:293] postStartSetup for "bridge-421834" (driver="kvm2")
	I0926 23:53:24.153679   66389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:53:24.153702   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.153959   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:53:24.153981   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.157161   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.157549   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.157581   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.157747   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.157970   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.158135   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.158262   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.242760   66389 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:53:24.248423   66389 info.go:137] Remote host: Buildroot 2025.02
	I0926 23:53:24.248454   66389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/addons for local assets ...
	I0926 23:53:24.248546   66389 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-6020/.minikube/files for local assets ...
	I0926 23:53:24.248672   66389 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem -> 99142.pem in /etc/ssl/certs
	I0926 23:53:24.248843   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:53:24.261877   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:53:24.294992   66389 start.go:296] duration metric: took 141.309355ms for postStartSetup
	I0926 23:53:24.295056   66389 main.go:141] libmachine: (bridge-421834) Calling .GetConfigRaw
	I0926 23:53:24.295859   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:24.299304   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.299686   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.299714   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.300033   66389 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/config.json ...
	I0926 23:53:24.300312   66389 start.go:128] duration metric: took 20.974125431s to createHost
	I0926 23:53:24.300339   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.303319   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.303715   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.303749   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.303928   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.304158   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.304347   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.304471   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.304655   66389 main.go:141] libmachine: Using SSH client type: native
	I0926 23:53:24.304919   66389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0926 23:53:24.304931   66389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 23:53:24.415992   66389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758930804.377640726
	
	I0926 23:53:24.416017   66389 fix.go:216] guest clock: 1758930804.377640726
	I0926 23:53:24.416024   66389 fix.go:229] Guest: 2025-09-26 23:53:24.377640726 +0000 UTC Remote: 2025-09-26 23:53:24.300327312 +0000 UTC m=+21.115024473 (delta=77.313414ms)
	I0926 23:53:24.416044   66389 fix.go:200] guest clock delta is within tolerance: 77.313414ms
	I0926 23:53:24.416048   66389 start.go:83] releasing machines lock for "bridge-421834", held for 21.089950951s
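The guest-clock check above runs `date +%s.%N` over SSH and compares the result against the host clock, accepting the machine when the delta is within a tolerance. A small Go sketch of the parsing and comparison follows; the 1-second tolerance is an assumption for the example, since the actual threshold is not shown in this log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad so a short fraction keeps its magnitude.
		frac += strings.Repeat("0", 9-len(frac))
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1758930804.377640726\n") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta < time.Second { // assumed tolerance for this sketch
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
	}
}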
	I0926 23:53:24.416073   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.416376   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:24.419489   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.419871   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.419893   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.420150   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.420725   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.420935   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:24.421036   66389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:53:24.421085   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.421190   66389 ssh_runner.go:195] Run: cat /version.json
	I0926 23:53:24.421211   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:24.424480   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.424612   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.424970   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.424994   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.425021   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:24.425059   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:24.425157   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.425409   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:24.425420   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.425603   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:24.425701   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.425785   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:24.425891   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.425962   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:24.508618   66389 ssh_runner.go:195] Run: systemctl --version
	I0926 23:53:24.539059   66389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:53:24.702907   66389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 23:53:24.710782   66389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 23:53:24.710886   66389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:53:24.734076   66389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 23:53:24.734098   66389 start.go:495] detecting cgroup driver to use...
	I0926 23:53:24.734153   66389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:53:24.756401   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:53:24.778106   66389 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:53:24.778184   66389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:53:24.799542   66389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:53:24.822114   66389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:53:24.996303   66389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:53:25.224791   66389 docker.go:234] disabling docker service ...
	I0926 23:53:25.224891   66389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:53:25.243432   66389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:53:25.259878   66389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:53:25.431451   66389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:53:25.599621   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:53:25.616957   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:53:25.643436   66389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:53:25.643526   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.657988   66389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0926 23:53:25.658047   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.672857   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.688715   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.709342   66389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:53:25.727302   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.744379   66389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:53:25.770014   66389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
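The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, and injects default_sysctls. As a rough Go sketch of the same kind of rewrite (not minikube's code; only the pause_image and cgroup_manager keys are shown, with the file path and values taken from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace any existing pause_image line, commented or not, just as the
	// sed expression in the log does.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Force the cgroupfs cgroup manager.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}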
	I0926 23:53:25.784756   66389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:53:25.796461   66389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 23:53:25.796554   66389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 23:53:25.823440   66389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:53:25.838860   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:25.999026   66389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:53:26.127274   66389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:53:26.127366   66389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:53:26.133585   66389 start.go:563] Will wait 60s for crictl version
	I0926 23:53:26.133665   66389 ssh_runner.go:195] Run: which crictl
	I0926 23:53:26.138367   66389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:53:26.189930   66389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0926 23:53:26.190029   66389 ssh_runner.go:195] Run: crio --version
	I0926 23:53:26.223605   66389 ssh_runner.go:195] Run: crio --version
	I0926 23:53:26.281579   66389 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0926 23:53:26.282887   66389 main.go:141] libmachine: (bridge-421834) Calling .GetIP
	I0926 23:53:26.286371   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:26.286847   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:26.286880   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:26.287176   66389 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0926 23:53:26.292885   66389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:53:26.312843   66389 kubeadm.go:883] updating cluster {Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:53:26.312973   66389 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:53:26.313032   66389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:53:26.352138   66389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0926 23:53:26.352234   66389 ssh_runner.go:195] Run: which lz4
	I0926 23:53:26.357081   66389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 23:53:26.362557   66389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 23:53:26.362599   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0926 23:53:28.137751   66389 crio.go:462] duration metric: took 1.780698913s to copy over tarball
	I0926 23:53:28.137885   66389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 23:53:24.614082   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:24.614128   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:24.614139   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:24.614150   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:24.614157   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:24.614163   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:24.614169   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:24.614181   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:24.614205   64230 retry.go:31] will retry after 773.437901ms: missing components: kube-dns
	I0926 23:53:25.393682   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:25.393730   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:25.393738   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:25.393746   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:25.393753   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:25.393759   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:25.393779   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:25.393789   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:25.393806   64230 retry.go:31] will retry after 1.022431217s: missing components: kube-dns
	I0926 23:53:26.420976   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:26.421026   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:26.421036   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:26.421044   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:26.421052   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:26.421059   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:26.421065   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:26.421073   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:26.421092   64230 retry.go:31] will retry after 1.319572477s: missing components: kube-dns
	I0926 23:53:27.746429   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:27.746483   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:27.746496   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:27.746504   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:27.746516   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:27.746523   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:27.746528   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:27.746536   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:27.746554   64230 retry.go:31] will retry after 1.82235326s: missing components: kube-dns
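(The interleaved 64230 lines above are minikube's system-pods wait loop: it lists the kube-system pods, reports which expected components are still missing — here kube-dns, because coredns is still Pending — and retries after a growing delay. A minimal Go sketch of that poll-with-backoff pattern; the helper names are hypothetical and this is not minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls check until it returns nil or the deadline passes,
    // roughly doubling the delay (with a little jitter) between attempts.
    func waitFor(check func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 500 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        err := waitFor(func() error {
            attempts++
            if attempts < 4 { // stand-in for "coredns pod not Running yet"
                return errors.New("missing components: kube-dns")
            }
            return nil
        }, time.Minute)
        fmt.Println("done:", err)
    }
)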
	I0926 23:53:30.014215   66389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876301553s)
	I0926 23:53:30.014243   66389 crio.go:469] duration metric: took 1.876457477s to extract the tarball
	I0926 23:53:30.014251   66389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 23:53:30.059487   66389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:53:30.115146   66389 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:53:30.115176   66389 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:53:30.115187   66389 kubeadm.go:934] updating node { 192.168.61.22 8443 v1.34.0 crio true true} ...
	I0926 23:53:30.115308   66389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-421834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0926 23:53:30.115388   66389 ssh_runner.go:195] Run: crio config
	I0926 23:53:30.167607   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:30.167639   66389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:53:30.167667   66389 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.22 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-421834 NodeName:bridge-421834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:53:30.167811   66389 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-421834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
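(The generated kubeadm.yaml above is a single file holding four YAML documents — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration — separated by "---". A small Go sketch, assuming gopkg.in/yaml.v3 is available, that walks such a multi-document file and prints each document's kind; the local file path is an assumption, the test writes it to /var/tmp/minikube/kubeadm.yaml on the node:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative local copy of the generated config
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }
)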
	
	I0926 23:53:30.167908   66389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:53:30.180734   66389 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:53:30.180805   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:53:30.194149   66389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0926 23:53:30.217696   66389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:53:30.240739   66389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0926 23:53:30.264328   66389 ssh_runner.go:195] Run: grep 192.168.61.22	control-plane.minikube.internal$ /etc/hosts
	I0926 23:53:30.269648   66389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
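(The bash one-liner above is how minikube pins control-plane.minikube.internal in /etc/hosts: it drops any existing line for that name, appends the current control-plane IP, writes the result to a temp file, and copies it back over /etc/hosts in one step. A Go sketch of the same filter-append-replace pattern, with the path and IP hard-coded for illustration; note the real command uses sudo cp rather than a rename:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.61.22\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            log.Fatal(err)
        }

        // Keep every line except an existing mapping for the control-plane name.
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        out := strings.Join(kept, "\n") + "\n"

        // Write to a temp file first, then move it into place, so /etc/hosts is never half-written.
        tmp := hostsPath + ".minikube.tmp"
        if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
            log.Fatal(err)
        }
        if err := os.Rename(tmp, hostsPath); err != nil {
            log.Fatal(err)
        }
    }
)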
	I0926 23:53:30.287057   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:30.442383   66389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:30.479080   66389 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834 for IP: 192.168.61.22
	I0926 23:53:30.479099   66389 certs.go:195] generating shared ca certs ...
	I0926 23:53:30.479113   66389 certs.go:227] acquiring lock for ca certs: {Name:mk9e164f84dd227cf84a459eec91beae2bb75a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.479292   66389 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key
	I0926 23:53:30.479364   66389 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key
	I0926 23:53:30.479379   66389 certs.go:257] generating profile certs ...
	I0926 23:53:30.479454   66389 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key
	I0926 23:53:30.479470   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt with IP's: []
	I0926 23:53:30.614117   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt ...
	I0926 23:53:30.614146   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: {Name:mk17199a9894daa8e1fa3f5d03c581f8755160b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.614322   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key ...
	I0926 23:53:30.614333   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.key: {Name:mk5b79db2f23a0408c20d1d2457c1875b85a52ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.614409   66389 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562
	I0926 23:53:30.614425   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.22]
	I0926 23:53:30.798397   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 ...
	I0926 23:53:30.798424   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562: {Name:mkbe05319d1195665a56244768f88be845598026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.798593   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562 ...
	I0926 23:53:30.798609   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562: {Name:mkb5acbb2d9a9d4b3b899cbffa845b207e16c72e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:30.798682   66389 certs.go:382] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt.f08dc562 -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt
	I0926 23:53:30.798776   66389 certs.go:386] copying /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key.f08dc562 -> /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key
	I0926 23:53:30.798853   66389 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key
	I0926 23:53:30.798865   66389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt with IP's: []
	I0926 23:53:31.109615   66389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt ...
	I0926 23:53:31.109646   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt: {Name:mkfb5969364c71ffbef78a5f55d4f61e4da59e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:31.109859   66389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key ...
	I0926 23:53:31.109877   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key: {Name:mk487c1900a9dcdeef7b8e4b33f6ca9e9211812a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:31.110063   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem (1338 bytes)
	W0926 23:53:31.110102   66389 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914_empty.pem, impossibly tiny 0 bytes
	I0926 23:53:31.110111   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:53:31.110132   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/ca.pem (1082 bytes)
	I0926 23:53:31.110153   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:53:31.110176   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/certs/key.pem (1675 bytes)
	I0926 23:53:31.110212   66389 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem (1708 bytes)
	I0926 23:53:31.110843   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:53:31.150929   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0926 23:53:31.199098   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:53:31.246838   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 23:53:31.283435   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:53:31.320685   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:53:31.355710   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:53:31.393463   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:53:31.431568   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/ssl/certs/99142.pem --> /usr/share/ca-certificates/99142.pem (1708 bytes)
	I0926 23:53:31.465636   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:53:31.499181   66389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-6020/.minikube/certs/9914.pem --> /usr/share/ca-certificates/9914.pem (1338 bytes)
	I0926 23:53:31.532864   66389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:53:31.559868   66389 ssh_runner.go:195] Run: openssl version
	I0926 23:53:31.569935   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99142.pem && ln -fs /usr/share/ca-certificates/99142.pem /etc/ssl/certs/99142.pem"
	I0926 23:53:31.587038   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.593896   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:43 /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.593977   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99142.pem
	I0926 23:53:31.603261   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99142.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:53:31.620931   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:53:31.636700   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.642977   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.643036   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:53:31.651130   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:53:31.668900   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9914.pem && ln -fs /usr/share/ca-certificates/9914.pem /etc/ssl/certs/9914.pem"
	I0926 23:53:31.687844   66389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.695764   66389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:43 /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.695857   66389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9914.pem
	I0926 23:53:31.705293   66389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9914.pem /etc/ssl/certs/51391683.0"
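(The openssl x509 -hash calls above compute each certificate's subject hash; OpenSSL-based clients look up CAs in /etc/ssl/certs by files named <hash>.0, which is why each pushed cert is then linked under that name — e.g. b5213941.0 for minikubeCA.pem. A sketch of that step in Go, shelling out to openssl the same way; it assumes openssl is on PATH and the process may write to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert symlinks certPath into /etc/ssl/certs under its OpenSSL subject hash.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Remove any stale link first so os.Symlink does not fail with "file exists".
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, cert := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/9914.pem",
            "/usr/share/ca-certificates/99142.pem",
        } {
            if err := linkCert(cert); err != nil {
                log.Fatal(err)
            }
        }
    }
)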
	I0926 23:53:31.721060   66389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:53:31.726820   66389 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:53:31.726909   66389 kubeadm.go:400] StartCluster: {Name:bridge-421834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-421834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:53:31.726989   66389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:53:31.727056   66389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:53:31.773517   66389 cri.go:89] found id: ""
	I0926 23:53:31.773584   66389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:53:31.787140   66389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:53:31.802588   66389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:53:31.819198   66389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:53:31.819220   66389 kubeadm.go:157] found existing configuration files:
	
	I0926 23:53:31.819279   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:53:31.838315   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:53:31.838392   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:53:31.854112   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:53:31.868738   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:53:31.868806   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:53:31.888357   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:53:31.910570   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:53:31.910649   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:53:31.929211   66389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:53:31.941990   66389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:53:31.942065   66389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 23:53:31.956055   66389 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 23:53:32.131816   66389 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 23:53:29.574916   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:29.574958   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:29.574968   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:29.574974   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:29.574979   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:29.574985   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:29.574990   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:29.574994   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:29.575015   64230 retry.go:31] will retry after 1.825517142s: missing components: kube-dns
	I0926 23:53:31.553883   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:31.553924   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:31.553933   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:31.553943   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:31.553949   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:31.553957   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:31.553962   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:31.553968   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:31.553988   64230 retry.go:31] will retry after 2.267864987s: missing components: kube-dns
	I0926 23:53:33.828310   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:33.828346   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:33.828356   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:33.828364   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:33.828370   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:33.828381   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:33.828390   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:33.828401   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:33.828431   64230 retry.go:31] will retry after 2.442062906s: missing components: kube-dns
	I0926 23:53:36.276431   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:36.276464   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:36.276472   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:36.276481   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:36.276489   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:36.276494   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:36.276499   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:36.276506   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:36.276528   64230 retry.go:31] will retry after 3.88102041s: missing components: kube-dns
	I0926 23:53:40.166704   64230 system_pods.go:86] 7 kube-system pods found
	I0926 23:53:40.166736   64230 system_pods.go:89] "coredns-66bc5c9577-mqjzf" [3615b35c-1555-475e-9638-493d33edc522] Running
	I0926 23:53:40.166743   64230 system_pods.go:89] "etcd-flannel-421834" [339279c6-f0a0-45f1-b9fe-d9d807bdc020] Running
	I0926 23:53:40.166749   64230 system_pods.go:89] "kube-apiserver-flannel-421834" [4753aa00-d34b-40e3-9854-45ea2a7576a1] Running
	I0926 23:53:40.166755   64230 system_pods.go:89] "kube-controller-manager-flannel-421834" [5f2e9634-8d55-47e8-9798-9de724e05c22] Running
	I0926 23:53:40.166760   64230 system_pods.go:89] "kube-proxy-4mmdk" [d450e678-6c2e-4d03-aaed-896db6c08224] Running
	I0926 23:53:40.166765   64230 system_pods.go:89] "kube-scheduler-flannel-421834" [e6764fde-9c18-4d3b-a620-845d090df18b] Running
	I0926 23:53:40.166769   64230 system_pods.go:89] "storage-provisioner" [9248a14e-179e-4aa9-87ba-8e03a8430609] Running
	I0926 23:53:40.166779   64230 system_pods.go:126] duration metric: took 17.577811922s to wait for k8s-apps to be running ...
	I0926 23:53:40.166788   64230 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:53:40.166856   64230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:53:40.191875   64230 system_svc.go:56] duration metric: took 25.068251ms WaitForService to wait for kubelet
	I0926 23:53:40.191923   64230 kubeadm.go:586] duration metric: took 24.475441358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:40.191944   64230 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:53:40.196633   64230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:53:40.196668   64230 node_conditions.go:123] node cpu capacity is 2
	I0926 23:53:40.196689   64230 node_conditions.go:105] duration metric: took 4.737391ms to run NodePressure ...
	I0926 23:53:40.196703   64230 start.go:241] waiting for startup goroutines ...
	I0926 23:53:40.196714   64230 start.go:246] waiting for cluster config update ...
	I0926 23:53:40.196729   64230 start.go:255] writing updated cluster config ...
	I0926 23:53:40.197083   64230 ssh_runner.go:195] Run: rm -f paused
	I0926 23:53:40.205632   64230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:40.211699   64230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqjzf" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.220181   64230 pod_ready.go:94] pod "coredns-66bc5c9577-mqjzf" is "Ready"
	I0926 23:53:40.220217   64230 pod_ready.go:86] duration metric: took 8.486544ms for pod "coredns-66bc5c9577-mqjzf" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.224240   64230 pod_ready.go:83] waiting for pod "etcd-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.233287   64230 pod_ready.go:94] pod "etcd-flannel-421834" is "Ready"
	I0926 23:53:40.233354   64230 pod_ready.go:86] duration metric: took 9.081499ms for pod "etcd-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.237176   64230 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.243744   64230 pod_ready.go:94] pod "kube-apiserver-flannel-421834" is "Ready"
	I0926 23:53:40.243771   64230 pod_ready.go:86] duration metric: took 6.565667ms for pod "kube-apiserver-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.246287   64230 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.611165   64230 pod_ready.go:94] pod "kube-controller-manager-flannel-421834" is "Ready"
	I0926 23:53:40.611197   64230 pod_ready.go:86] duration metric: took 364.881268ms for pod "kube-controller-manager-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:40.811021   64230 pod_ready.go:83] waiting for pod "kube-proxy-4mmdk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.210445   64230 pod_ready.go:94] pod "kube-proxy-4mmdk" is "Ready"
	I0926 23:53:41.210487   64230 pod_ready.go:86] duration metric: took 399.43112ms for pod "kube-proxy-4mmdk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.413016   64230 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.811194   64230 pod_ready.go:94] pod "kube-scheduler-flannel-421834" is "Ready"
	I0926 23:53:41.811229   64230 pod_ready.go:86] duration metric: took 398.178042ms for pod "kube-scheduler-flannel-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:53:41.811245   64230 pod_ready.go:40] duration metric: took 1.605582664s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:41.862469   64230 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:53:41.865112   64230 out.go:179] * Done! kubectl is now configured to use "flannel-421834" cluster and "default" namespace by default
	I0926 23:53:44.841714   66389 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:53:44.841815   66389 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:53:44.841914   66389 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:53:44.842004   66389 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:53:44.842131   66389 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:53:44.842235   66389 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:53:44.843943   66389 out.go:252]   - Generating certificates and keys ...
	I0926 23:53:44.844024   66389 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:53:44.844106   66389 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:53:44.844175   66389 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:53:44.844225   66389 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:53:44.844282   66389 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:53:44.844326   66389 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:53:44.844389   66389 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:53:44.844572   66389 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-421834 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0926 23:53:44.844659   66389 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:53:44.844845   66389 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-421834 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0926 23:53:44.844938   66389 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:53:44.845032   66389 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:53:44.845103   66389 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:53:44.845210   66389 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:53:44.845322   66389 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:53:44.845413   66389 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:53:44.845503   66389 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:53:44.845593   66389 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:53:44.845704   66389 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:53:44.845843   66389 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:53:44.845941   66389 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:53:44.847026   66389 out.go:252]   - Booting up control plane ...
	I0926 23:53:44.847119   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:53:44.847226   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:53:44.847299   66389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:53:44.847399   66389 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:53:44.847566   66389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:53:44.847718   66389 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:53:44.847805   66389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:53:44.847893   66389 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:53:44.848049   66389 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:53:44.848180   66389 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:53:44.848245   66389 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001452515s
	I0926 23:53:44.848336   66389 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:53:44.848413   66389 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.22:8443/livez
	I0926 23:53:44.848552   66389 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:53:44.848656   66389 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:53:44.848759   66389 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.899153305s
	I0926 23:53:44.848883   66389 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.330601769s
	I0926 23:53:44.848976   66389 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502741118s
	I0926 23:53:44.849097   66389 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:53:44.849243   66389 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:53:44.849331   66389 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:53:44.849526   66389 kubeadm.go:318] [mark-control-plane] Marking the node bridge-421834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:53:44.849603   66389 kubeadm.go:318] [bootstrap-token] Using token: kd6815.ojx3n455o8zykny6
	I0926 23:53:44.850986   66389 out.go:252]   - Configuring RBAC rules ...
	I0926 23:53:44.851099   66389 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:53:44.851228   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:53:44.851433   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:53:44.851642   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:53:44.851750   66389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:53:44.851854   66389 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:53:44.851998   66389 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:53:44.852066   66389 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:53:44.852126   66389 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:53:44.852132   66389 kubeadm.go:318] 
	I0926 23:53:44.852186   66389 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:53:44.852192   66389 kubeadm.go:318] 
	I0926 23:53:44.852279   66389 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:53:44.852297   66389 kubeadm.go:318] 
	I0926 23:53:44.852331   66389 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:53:44.852427   66389 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:53:44.852477   66389 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:53:44.852487   66389 kubeadm.go:318] 
	I0926 23:53:44.852539   66389 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:53:44.852543   66389 kubeadm.go:318] 
	I0926 23:53:44.852616   66389 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:53:44.852630   66389 kubeadm.go:318] 
	I0926 23:53:44.852700   66389 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:53:44.852769   66389 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:53:44.852855   66389 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:53:44.852862   66389 kubeadm.go:318] 
	I0926 23:53:44.852936   66389 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:53:44.853012   66389 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:53:44.853018   66389 kubeadm.go:318] 
	I0926 23:53:44.853090   66389 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kd6815.ojx3n455o8zykny6 \
	I0926 23:53:44.853182   66389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 \
	I0926 23:53:44.853211   66389 kubeadm.go:318] 	--control-plane 
	I0926 23:53:44.853217   66389 kubeadm.go:318] 
	I0926 23:53:44.853290   66389 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:53:44.853300   66389 kubeadm.go:318] 
	I0926 23:53:44.853415   66389 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kd6815.ojx3n455o8zykny6 \
	I0926 23:53:44.853569   66389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1bc065dc0287f5108511f75d77232285046ef3d632aca3b6b4eb77abcecaa58 
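(The --discovery-token-ca-cert-hash printed in the join command above is, as far as kubeadm documents it, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from ca.crt alone. A short Go sketch under that assumption; the cert path is the one minikube uses on the node and is illustrative here:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
)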
	I0926 23:53:44.853606   66389 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:53:44.855848   66389 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 23:53:44.856961   66389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:53:44.873344   66389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 23:53:44.906446   66389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:53:44.906523   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:44.906598   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-421834 minikube.k8s.io/updated_at=2025_09_26T23_53_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=bridge-421834 minikube.k8s.io/primary=true
	I0926 23:53:45.063070   66389 ops.go:34] apiserver oom_adj: -16
	I0926 23:53:45.063206   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:45.563664   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:46.063658   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:46.563350   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:47.063599   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:47.564089   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:48.063624   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:48.564058   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:49.064075   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:49.563360   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:50.064065   66389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:53:50.236803   66389 kubeadm.go:1113] duration metric: took 5.330336807s to wait for elevateKubeSystemPrivileges
	I0926 23:53:50.236867   66389 kubeadm.go:402] duration metric: took 18.509960989s to StartCluster
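(The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist before it applies the minikube-rbac ClusterRoleBinding that grants cluster-admin to kube-system:default — the elevateKubeSystemPrivileges step summarized at the end. Roughly the same flow with client-go, sketched under the assumption that an admin kubeconfig is readable locally:

    package main

    import (
        "context"
        "log"
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()

        // Wait for the "default" ServiceAccount in the default namespace to exist.
        for {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }

        // Grant cluster-admin to kube-system:default, as the minikube-rbac binding does.
        _, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
            Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
        }, metav1.CreateOptions{})
        if err != nil {
            log.Fatal(err)
        }
    }
)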
	I0926 23:53:50.236892   66389 settings.go:142] acquiring lock: {Name:mk8a46d5a99d51096f5a73696c8b5f570ce357f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:50.236965   66389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:53:50.239258   66389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-6020/kubeconfig: {Name:mkc92bf76d8ba21d0a2b0bb28107401b61549063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:53:50.239549   66389 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:53:50.239613   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:53:50.239650   66389 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:53:50.239753   66389 addons.go:69] Setting storage-provisioner=true in profile "bridge-421834"
	I0926 23:53:50.239776   66389 addons.go:69] Setting default-storageclass=true in profile "bridge-421834"
	I0926 23:53:50.239780   66389 addons.go:238] Setting addon storage-provisioner=true in "bridge-421834"
	I0926 23:53:50.239799   66389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-421834"
	I0926 23:53:50.239811   66389 host.go:66] Checking if "bridge-421834" exists ...
	I0926 23:53:50.239816   66389 config.go:182] Loaded profile config "bridge-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:53:50.240362   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.240364   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.240408   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.240427   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.241120   66389 out.go:179] * Verifying Kubernetes components...
	I0926 23:53:50.242275   66389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:53:50.256055   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0926 23:53:50.256705   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.257229   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.257252   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.257648   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.257929   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.258084   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0926 23:53:50.258574   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.259226   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.259250   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.259660   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.260226   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.260268   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.262397   66389 addons.go:238] Setting addon default-storageclass=true in "bridge-421834"
	I0926 23:53:50.262469   66389 host.go:66] Checking if "bridge-421834" exists ...
	I0926 23:53:50.262906   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.262949   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.277885   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I0926 23:53:50.278377   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.278487   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0926 23:53:50.279074   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.279098   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.279165   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.279512   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.279700   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.279725   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.280137   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.280296   66389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:53:50.280345   66389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:53:50.280478   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.282817   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:50.284486   66389 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:53:50.285795   66389 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:50.285814   66389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:53:50.285845   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:50.289682   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.290252   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:50.290314   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.290601   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:50.290876   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:50.291069   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:50.291213   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:50.297456   66389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0926 23:53:50.298056   66389 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:53:50.298699   66389 main.go:141] libmachine: Using API Version  1
	I0926 23:53:50.298723   66389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:53:50.299110   66389 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:53:50.299293   66389 main.go:141] libmachine: (bridge-421834) Calling .GetState
	I0926 23:53:50.301249   66389 main.go:141] libmachine: (bridge-421834) Calling .DriverName
	I0926 23:53:50.301483   66389 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:50.301502   66389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:53:50.301519   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHHostname
	I0926 23:53:50.305236   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.305851   66389 main.go:141] libmachine: (bridge-421834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:cf:e4", ip: ""} in network mk-bridge-421834: {Iface:virbr3 ExpiryTime:2025-09-27 00:53:21 +0000 UTC Type:0 Mac:52:54:00:35:cf:e4 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:bridge-421834 Clientid:01:52:54:00:35:cf:e4}
	I0926 23:53:50.305879   66389 main.go:141] libmachine: (bridge-421834) DBG | domain bridge-421834 has defined IP address 192.168.61.22 and MAC address 52:54:00:35:cf:e4 in network mk-bridge-421834
	I0926 23:53:50.306072   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHPort
	I0926 23:53:50.306224   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHKeyPath
	I0926 23:53:50.306355   66389 main.go:141] libmachine: (bridge-421834) Calling .GetSSHUsername
	I0926 23:53:50.306457   66389 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/bridge-421834/id_rsa Username:docker}
	I0926 23:53:50.712116   66389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:53:50.752112   66389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:53:50.752209   66389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:53:51.031357   66389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:53:51.373443   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.373476   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.373784   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.373799   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.373808   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.373816   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.374092   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.374105   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.374738   66389 node_ready.go:35] waiting up to 15m0s for node "bridge-421834" to be "Ready" ...
	I0926 23:53:51.398770   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:51.398799   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:51.399100   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:51.399144   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:51.399153   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:51.400667   66389 node_ready.go:49] node "bridge-421834" is "Ready"
	I0926 23:53:51.400698   66389 node_ready.go:38] duration metric: took 25.944029ms for node "bridge-421834" to be "Ready" ...
	I0926 23:53:51.400723   66389 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:53:51.400782   66389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:53:51.803695   66389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.051437014s)
	I0926 23:53:51.803741   66389 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
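For context (a reconstruction, not captured output): the sed pipeline in the command above edits the coredns ConfigMap so the Corefile gains a hosts block ahead of the forward directive and a log directive ahead of errors. Rebuilt from those sed expressions alone, the affected part of the Corefile would look roughly like this; the surrounding directives come from the stock Corefile and may differ:

        log
        errors
        ...
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf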
	I0926 23:53:52.332402   66389 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-421834" context rescaled to 1 replicas
	I0926 23:53:52.346710   66389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.315306237s)
	I0926 23:53:52.346793   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:52.346798   66389 api_server.go:72] duration metric: took 2.107212078s to wait for apiserver process to appear ...
	I0926 23:53:52.346813   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:52.346821   66389 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:53:52.346950   66389 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0926 23:53:52.347191   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:52.347197   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:52.347217   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:52.347229   66389 main.go:141] libmachine: Making call to close driver server
	I0926 23:53:52.347240   66389 main.go:141] libmachine: (bridge-421834) Calling .Close
	I0926 23:53:52.347568   66389 main.go:141] libmachine: (bridge-421834) DBG | Closing plugin on server side
	I0926 23:53:52.347609   66389 main.go:141] libmachine: Successfully made call to close driver server
	I0926 23:53:52.347621   66389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0926 23:53:52.349314   66389 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0926 23:53:52.350678   66389 addons.go:514] duration metric: took 2.1110452s for enable addons: enabled=[default-storageclass storage-provisioner]
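A minimal way to double-check the two addons enabled above, assuming the kubectl context is named after the profile (bridge-421834) as the final log line suggests; these commands are illustrative and were not run as part of the test:

        kubectl --context bridge-421834 get storageclass
        kubectl --context bridge-421834 -n kube-system get pod storage-provisioner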
	I0926 23:53:52.366659   66389 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
	I0926 23:53:52.370284   66389 api_server.go:141] control plane version: v1.34.0
	I0926 23:53:52.370317   66389 api_server.go:131] duration metric: took 23.391786ms to wait for apiserver health ...
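The healthz wait above amounts to polling the endpoint until it answers 200 with the body "ok". A hedged one-line equivalent, assuming the default RBAC that allows unauthenticated reads of /healthz and skipping certificate verification for brevity:

        curl -k https://192.168.61.22:8443/healthz
        # expected on a healthy control plane: ok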
	I0926 23:53:52.370337   66389 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:53:52.379615   66389 system_pods.go:59] 8 kube-system pods found
	I0926 23:53:52.379691   66389 system_pods.go:61] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.379712   66389 system_pods.go:61] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.379733   66389 system_pods.go:61] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.379743   66389 system_pods.go:61] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.379774   66389 system_pods.go:61] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.379784   66389 system_pods.go:61] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:53:52.379789   66389 system_pods.go:61] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.379796   66389 system_pods.go:61] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending
	I0926 23:53:52.379805   66389 system_pods.go:74] duration metric: took 9.459647ms to wait for pod list to return data ...
	I0926 23:53:52.379842   66389 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:53:52.402179   66389 default_sa.go:45] found service account: "default"
	I0926 23:53:52.402215   66389 default_sa.go:55] duration metric: took 22.362847ms for default service account to be created ...
	I0926 23:53:52.402228   66389 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:53:52.409473   66389 system_pods.go:86] 8 kube-system pods found
	I0926 23:53:52.409514   66389 system_pods.go:89] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.409542   66389 system_pods.go:89] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.409560   66389 system_pods.go:89] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.409574   66389 system_pods.go:89] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.409584   66389 system_pods.go:89] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.409596   66389 system_pods.go:89] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0926 23:53:52.409606   66389 system_pods.go:89] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.409619   66389 system_pods.go:89] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:52.409654   66389 retry.go:31] will retry after 207.463574ms: missing components: kube-dns, kube-proxy
	I0926 23:53:52.623107   66389 system_pods.go:86] 8 kube-system pods found
	I0926 23:53:52.623139   66389 system_pods.go:89] "coredns-66bc5c9577-49fzk" [050d4bb7-2fdd-4189-bfae-c181677f0679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.623146   66389 system_pods.go:89] "coredns-66bc5c9577-xw5nt" [08e9bf35-7bae-413d-be70-89061055577c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:53:52.623153   66389 system_pods.go:89] "etcd-bridge-421834" [b99ecd0b-dc3b-4a78-96e6-5a8be43fabef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:53:52.623163   66389 system_pods.go:89] "kube-apiserver-bridge-421834" [424b8a50-2bf9-4266-801b-34046706404f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:53:52.623167   66389 system_pods.go:89] "kube-controller-manager-bridge-421834" [cf084341-1e40-4135-b0ef-1256ede5ba8e] Running
	I0926 23:53:52.623171   66389 system_pods.go:89] "kube-proxy-x9dj6" [4cc990be-9a6e-45a7-b922-3fe73d1d9dd3] Running
	I0926 23:53:52.623175   66389 system_pods.go:89] "kube-scheduler-bridge-421834" [1c250e7e-aa0e-4500-90b8-ab40d07e0806] Running
	I0926 23:53:52.623181   66389 system_pods.go:89] "storage-provisioner" [d6f2c195-dcfe-4d02-9f7d-d41adcd6dd65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:53:52.623191   66389 system_pods.go:126] duration metric: took 220.956849ms to wait for k8s-apps to be running ...
	I0926 23:53:52.623206   66389 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:53:52.623255   66389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:53:52.651319   66389 system_svc.go:56] duration metric: took 28.102795ms WaitForService to wait for kubelet
	I0926 23:53:52.651349   66389 kubeadm.go:586] duration metric: took 2.411767704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:53:52.651365   66389 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:53:52.657789   66389 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 23:53:52.657814   66389 node_conditions.go:123] node cpu capacity is 2
	I0926 23:53:52.657855   66389 node_conditions.go:105] duration metric: took 6.485077ms to run NodePressure ...
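The NodePressure step above reads the node's reported capacity (2 CPUs, 17734596Ki ephemeral storage). An approximate manual equivalent, with the context and node name assumed from this run:

        kubectl --context bridge-421834 get node bridge-421834 \
          -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'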
	I0926 23:53:52.657868   66389 start.go:241] waiting for startup goroutines ...
	I0926 23:53:52.657875   66389 start.go:246] waiting for cluster config update ...
	I0926 23:53:52.657885   66389 start.go:255] writing updated cluster config ...
	I0926 23:53:52.658164   66389 ssh_runner.go:195] Run: rm -f paused
	I0926 23:53:52.672097   66389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:53:52.678053   66389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49fzk" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:53:54.685216   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:53:57.185562   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:53:59.687662   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:02.184172   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:04.186722   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:06.186876   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:08.685625   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:10.686610   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:12.687726   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:15.185017   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:17.185233   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:19.185655   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:21.192607   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:23.685450   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:26.185796   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	W0926 23:54:28.185952   66389 pod_ready.go:104] pod "coredns-66bc5c9577-49fzk" is not "Ready", error: <nil>
	I0926 23:54:30.187086   66389 pod_ready.go:94] pod "coredns-66bc5c9577-49fzk" is "Ready"
	I0926 23:54:30.187117   66389 pod_ready.go:86] duration metric: took 37.509023928s for pod "coredns-66bc5c9577-49fzk" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.187131   66389 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.189279   66389 pod_ready.go:99] pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-xw5nt" not found
	I0926 23:54:30.189299   66389 pod_ready.go:86] duration metric: took 2.161005ms for pod "coredns-66bc5c9577-xw5nt" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.192820   66389 pod_ready.go:83] waiting for pod "etcd-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.197725   66389 pod_ready.go:94] pod "etcd-bridge-421834" is "Ready"
	I0926 23:54:30.197751   66389 pod_ready.go:86] duration metric: took 4.89165ms for pod "etcd-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.201190   66389 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.205123   66389 pod_ready.go:94] pod "kube-apiserver-bridge-421834" is "Ready"
	I0926 23:54:30.205149   66389 pod_ready.go:86] duration metric: took 3.936999ms for pod "kube-apiserver-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.207292   66389 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.582600   66389 pod_ready.go:94] pod "kube-controller-manager-bridge-421834" is "Ready"
	I0926 23:54:30.582626   66389 pod_ready.go:86] duration metric: took 375.315209ms for pod "kube-controller-manager-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:30.782631   66389 pod_ready.go:83] waiting for pod "kube-proxy-x9dj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.183190   66389 pod_ready.go:94] pod "kube-proxy-x9dj6" is "Ready"
	I0926 23:54:31.183217   66389 pod_ready.go:86] duration metric: took 400.559229ms for pod "kube-proxy-x9dj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.384947   66389 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.783377   66389 pod_ready.go:94] pod "kube-scheduler-bridge-421834" is "Ready"
	I0926 23:54:31.783401   66389 pod_ready.go:86] duration metric: took 398.422954ms for pod "kube-scheduler-bridge-421834" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:54:31.783412   66389 pod_ready.go:40] duration metric: took 39.111274508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:54:31.828635   66389 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:54:31.830519   66389 out.go:179] * Done! kubectl is now configured to use "bridge-421834" cluster and "default" namespace by default
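The extra kube-system readiness waits logged above (pod_ready.go) correspond roughly to kubectl wait invocations like the ones below, one per label selector. This sketch is illustrative only and does not reproduce the "Ready or be gone" tolerance the test applies to replaced pods such as coredns-66bc5c9577-xw5nt:

        kubectl --context bridge-421834 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
        kubectl --context bridge-421834 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m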
	
	
	==> CRI-O <==
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.669241969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931860669216148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34105ae2-08a4-4860-8c84-29dcb0ed29c3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.670202813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71616f35-6db0-421a-9336-5965e8a4012e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.670273485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71616f35-6db0-421a-9336-5965e8a4012e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.670559335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931770825209887,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=71616f35-6db0-421a-9336-5965e8a4012e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.714541371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba78d455-5c98-470c-8f84-95572e11c63d name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.714643924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba78d455-5c98-470c-8f84-95572e11c63d name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.715903333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e550d563-e9c1-49b2-b2c8-d02378087b93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.716347931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931860716327044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e550d563-e9c1-49b2-b2c8-d02378087b93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.717200881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e07f7c22-af60-4733-8bf9-170741f559b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.717412481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e07f7c22-af60-4733-8bf9-170741f559b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.717859700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931770825209887,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=e07f7c22-af60-4733-8bf9-170741f559b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.758297272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b9a89f8-374a-4118-847a-436000858ec8 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.758391272Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b9a89f8-374a-4118-847a-436000858ec8 name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.760223741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=698832bf-4b5c-4fb1-b677-65c395692ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.760780702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931860760752748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=698832bf-4b5c-4fb1-b677-65c395692ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.761229419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9e782e5-5c6c-4d5f-8d65-7befb0f2f94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.761305202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9e782e5-5c6c-4d5f-8d65-7befb0f2f94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.761616006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931770825209887,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=e9e782e5-5c6c-4d5f-8d65-7befb0f2f94c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.800329935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3934db8a-836c-4dbb-abe4-ab0c915d2d5d name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.800400675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3934db8a-836c-4dbb-abe4-ab0c915d2d5d name=/runtime.v1.RuntimeService/Version
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.802256149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6694497e-8250-40e7-a459-862b664f9e1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.802823433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758931860802798425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6694497e-8250-40e7-a459-862b664f9e1e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.803373280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0a5577-07bb-47c7-afc2-7c600d27f9b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.803559578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0a5577-07bb-47c7-afc2-7c600d27f9b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 00:11:00 embed-certs-994238 crio[887]: time="2025-09-27 00:11:00.803961764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf,PodSandboxId:c445e9cac1dcfef340df87c7e218d2609e2e4236913d8e9f9d8ae7e87c662283,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758931770825209887,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-6kgrc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: ecd98bba-a3d7-4bea-aa51-e341fb975527,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c71795d8becf30d65fcd1f8b7adfa9b3fc321ffd2ec7243cd813f2c5f096f4b5,PodSandboxId:8c1501f78c3823bbe9dbcd21d6c98b9ab311436777802ee6c275aca88b1bd44f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758930803680150171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8df1ce95-e9fb-4055-b0b2-1cba8175d80c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758930791230260177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.
kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39,PodSandboxId:3ac27b2b2310719da59887987e97a0fe651965c989fd22e1ad82b4b149458c3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758930767770242813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2bp42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fd6329-c2a2-4889-aadd-16436fea9fa8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819,PodSandboxId:8d8e1046ef7dd6222bebe31aa2012098ad966dc405a9f8380e67c08a237f8630,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758930760409703986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4c698e-9546-4a16-9319-156239442417,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746,PodSandboxId:6e61bfadd8735788b718d084a1e2b26371b301242af9d57928e6798512eefec5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,
State:CONTAINER_RUNNING,CreatedAt:1758930760394041601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26dzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f48ab8-1b63-4c01-bab6-cb0962763b4a,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4,PodSandboxId:1cf2d0cd78ed6a4668746bf0c80754bfdd3005bb7b27e8e45d78480cfedb4bda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,Create
dAt:1758930754960723313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a7d38056283e66eab9810b9763a9a84,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708,PodSandboxId:ac1329cc6d82c7b021057be4c0f25700064055646ce76cfff32b1d9822272035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758930754936260164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ad5b3639dc2d0ef7211c318ff3cec,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282,PodSandboxId:45d86926188eac76eb47281704b3a99fbf874c49e5e77c94da3d59aef2644ace,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:
5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758930754978753150,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed81a0af0e8e21f6f9d3352b1db239d7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7,PodSandboxId:6e718e678d86fe7991b81e6b13b59f11e17069b28a1cbbfcdfaf529d473
ec525,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758930754909305217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-994238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4798d6709d5d1af1dddaa09e0563abcd,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:
74" id=5c0a5577-07bb-47c7-afc2-7c600d27f9b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	016e883d41d0b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      About a minute ago   Exited              dashboard-metrics-scraper   8                   c445e9cac1dcf       dashboard-metrics-scraper-6ffb444bf9-6kgrc
	c71795d8becf3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago       Running             busybox                     1                   8c1501f78c382       busybox
	67eb663ec36d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago       Running             storage-provisioner         2                   8d8e1046ef7dd       storage-provisioner
	f2abc109f0d27       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago       Running             coredns                     1                   3ac27b2b23107       coredns-66bc5c9577-2bp42
	7c257886ddfab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago       Exited              storage-provisioner         1                   8d8e1046ef7dd       storage-provisioner
	4110a56d3c6a0       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      18 minutes ago       Running             kube-proxy                  1                   6e61bfadd8735       kube-proxy-26dzh
	459f9669b0d52       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago       Running             etcd                        1                   45d86926188ea       etcd-embed-certs-994238
	c8dd25e029012       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      18 minutes ago       Running             kube-scheduler              1                   1cf2d0cd78ed6       kube-scheduler-embed-certs-994238
	2064521c7e43d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      18 minutes ago       Running             kube-controller-manager     1                   ac1329cc6d82c       kube-controller-manager-embed-certs-994238
	eeb206142ca73       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      18 minutes ago       Running             kube-apiserver              1                   6e718e678d86f       kube-apiserver-embed-certs-994238
	
	
	==> coredns [f2abc109f0d27c6d2f5c20aa4429a0852eb7ac16d39609db9b8af781053c1f39] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51344 - 11755 "HINFO IN 3138978563286339268.2439510168868357739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037962699s
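The "Running configuration SHA512 = ..." line is CoreDNS's reload plugin logging a digest of the Corefile it loaded. The short sketch below shows how such a digest is computed; the Corefile body is a placeholder, so the printed value will not match the hash in the log, and any normalization CoreDNS applies before hashing is not reproduced here.

// Hypothetical sketch: compute a SHA512 digest of a Corefile, the same kind of
// value the reload plugin prints as "Running configuration SHA512 = ...".
// The Corefile content below is an illustrative placeholder, not this cluster's config.
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

func main() {
	corefile := `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
    reload
}
`
	sum := sha512.Sum512([]byte(corefile))
	fmt.Println("SHA512 =", hex.EncodeToString(sum[:]))
}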
	
	
	==> describe nodes <==
	Name:               embed-certs-994238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-994238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=embed-certs-994238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_49_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:49:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-994238
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Sep 2025 00:11:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Sep 2025 00:08:50 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Sep 2025 00:08:50 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Sep 2025 00:08:50 +0000   Fri, 26 Sep 2025 23:49:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Sep 2025 00:08:50 +0000   Fri, 26 Sep 2025 23:52:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.66
	  Hostname:    embed-certs-994238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 62d30061cfd044b9b19ed1fea89cb5e1
	  System UUID:                62d30061-cfd0-44b9-b19e-d1fea89cb5e1
	  Boot ID:                    a7877111-27f9-48e9-939a-e7385196adda
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-2bp42                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-embed-certs-994238                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-994238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-994238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-26dzh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-994238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-nr4tj               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6kgrc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9wwwt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     21m                kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeReady                21m                kubelet          Node embed-certs-994238 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node embed-certs-994238 event: Registered Node embed-certs-994238 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-994238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-994238 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node embed-certs-994238 has been rebooted, boot id: a7877111-27f9-48e9-939a-e7385196adda
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-994238 event: Registered Node embed-certs-994238 in Controller
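The percentages in the "Allocated resources" block above are each summed request divided by the node's allocatable capacity, displayed as a whole-number percent (850m of 2 CPUs is 42%; 370Mi of 3042712Ki memory is 12%). The sketch below redoes that arithmetic with the apimachinery resource.Quantity parser; truncating to a whole percent is an assumption about how kubectl formats the value.

// Hypothetical sketch: reproduce the request/allocatable percentages shown in
// the "Allocated resources" table (850m CPU of 2 cores, 370Mi of 3042712Ki memory).
// resource.MustParse is the real apimachinery parser; the integer rounding is assumed.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func percent(request, allocatable string) int64 {
	req := resource.MustParse(request)
	alloc := resource.MustParse(allocatable)
	// MilliValue puts cpu ("850m") and memory ("370Mi") quantities on a common
	// integer scale, so one helper covers both rows of the table.
	return req.MilliValue() * 100 / alloc.MilliValue()
}

func main() {
	fmt.Printf("cpu:    %d%%\n", percent("850m", "2"))          // 42%
	fmt.Printf("memory: %d%%\n", percent("370Mi", "3042712Ki")) // 12%
}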
	
	
	==> dmesg <==
	[Sep26 23:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001542] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003145] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.786625] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.114348] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.126859] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.797409] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.282526] kauditd_printk_skb: 239 callbacks suppressed
	[  +3.678837] kauditd_printk_skb: 110 callbacks suppressed
	[Sep26 23:53] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.252062] kauditd_printk_skb: 11 callbacks suppressed
	[ +18.469727] kauditd_printk_skb: 49 callbacks suppressed
	[Sep26 23:54] kauditd_printk_skb: 6 callbacks suppressed
	[ +46.997540] kauditd_printk_skb: 6 callbacks suppressed
	[Sep26 23:56] kauditd_printk_skb: 6 callbacks suppressed
	[Sep26 23:59] kauditd_printk_skb: 6 callbacks suppressed
	[Sep27 00:04] kauditd_printk_skb: 6 callbacks suppressed
	[Sep27 00:09] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [459f9669b0d52f2daa02064c61ac476b12a35a2d3e63320b0114bd6c1ea91282] <==
	{"level":"warn","ts":"2025-09-26T23:52:38.161990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.171406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:38.278282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T23:52:56.782653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.772537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16636747309780032576 > lease_revoke:<id:66e19988715fda8d>","response":"size:28"}
	{"level":"info","ts":"2025-09-26T23:52:56.782835Z","caller":"traceutil/trace.go:172","msg":"trace[517451159] linearizableReadLoop","detail":"{readStateIndex:757; appliedIndex:756; }","duration":"113.173371ms","start":"2025-09-26T23:52:56.669646Z","end":"2025-09-26T23:52:56.782819Z","steps":["trace[517451159] 'read index received'  (duration: 26.204µs)","trace[517451159] 'applied index is now lower than readState.Index'  (duration: 113.146143ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:52:56.782903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.241905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:52:56.782918Z","caller":"traceutil/trace.go:172","msg":"trace[1847920199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:705; }","duration":"113.271312ms","start":"2025-09-26T23:52:56.669642Z","end":"2025-09-26T23:52:56.782913Z","steps":["trace[1847920199] 'agreement among raft nodes before linearized reading'  (duration: 113.217896ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:56.932719Z","caller":"traceutil/trace.go:172","msg":"trace[1397283063] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"114.677065ms","start":"2025-09-26T23:52:56.818027Z","end":"2025-09-26T23:52:56.932704Z","steps":["trace[1397283063] 'process raft request'  (duration: 114.540397ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:57.088245Z","caller":"traceutil/trace.go:172","msg":"trace[852881930] linearizableReadLoop","detail":"{readStateIndex:758; appliedIndex:758; }","duration":"104.51992ms","start":"2025-09-26T23:52:56.983705Z","end":"2025-09-26T23:52:57.088225Z","steps":["trace[852881930] 'read index received'  (duration: 104.513833ms)","trace[852881930] 'applied index is now lower than readState.Index'  (duration: 5.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:52:57.094106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.401026ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:52:57.094229Z","caller":"traceutil/trace.go:172","msg":"trace[1856526367] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:706; }","duration":"110.542193ms","start":"2025-09-26T23:52:56.983674Z","end":"2025-09-26T23:52:57.094216Z","steps":["trace[1856526367] 'agreement among raft nodes before linearized reading'  (duration: 104.679881ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:52:57.094785Z","caller":"traceutil/trace.go:172","msg":"trace[157560446] transaction","detail":"{read_only:false; response_revision:707; number_of_response:1; }","duration":"269.874187ms","start":"2025-09-26T23:52:56.824900Z","end":"2025-09-26T23:52:57.094775Z","steps":["trace[157560446] 'process raft request'  (duration: 263.493408ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:20.171932Z","caller":"traceutil/trace.go:172","msg":"trace[1835746828] linearizableReadLoop","detail":"{readStateIndex:781; appliedIndex:781; }","duration":"189.080712ms","start":"2025-09-26T23:53:19.982835Z","end":"2025-09-26T23:53:20.171915Z","steps":["trace[1835746828] 'read index received'  (duration: 189.076373ms)","trace[1835746828] 'applied index is now lower than readState.Index'  (duration: 3.744µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:53:20.172271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.341662ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:53:20.172646Z","caller":"traceutil/trace.go:172","msg":"trace[1485630491] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:725; }","duration":"189.740658ms","start":"2025-09-26T23:53:19.982830Z","end":"2025-09-26T23:53:20.172571Z","steps":["trace[1485630491] 'agreement among raft nodes before linearized reading'  (duration: 189.316758ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:20.172665Z","caller":"traceutil/trace.go:172","msg":"trace[1818633723] transaction","detail":"{read_only:false; response_revision:726; number_of_response:1; }","duration":"194.00584ms","start":"2025-09-26T23:53:19.978643Z","end":"2025-09-26T23:53:20.172648Z","steps":["trace[1818633723] 'process raft request'  (duration: 193.867924ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:53:31.806113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.418678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:53:31.806220Z","caller":"traceutil/trace.go:172","msg":"trace[425644245] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"141.539345ms","start":"2025-09-26T23:53:31.664664Z","end":"2025-09-26T23:53:31.806203Z","steps":["trace[425644245] 'range keys from in-memory index tree'  (duration: 141.254113ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:53:50.726922Z","caller":"traceutil/trace.go:172","msg":"trace[2005300218] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"102.646556ms","start":"2025-09-26T23:53:50.624257Z","end":"2025-09-26T23:53:50.726904Z","steps":["trace[2005300218] 'process raft request'  (duration: 102.529043ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-27T00:02:36.684162Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1065}
	{"level":"info","ts":"2025-09-27T00:02:36.711104Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1065,"took":"26.473019ms","hash":1946232461,"current-db-size-bytes":3321856,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1421312,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-27T00:02:36.711819Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1946232461,"revision":1065,"compact-revision":-1}
	{"level":"info","ts":"2025-09-27T00:07:36.692592Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2025-09-27T00:07:36.696971Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1348,"took":"3.550269ms","hash":1399235855,"current-db-size-bytes":3321856,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1871872,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-27T00:07:36.697011Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1399235855,"revision":1348,"compact-revision":1065}
	
	
	==> kernel <==
	 00:11:01 up 18 min,  0 users,  load average: 0.46, 0.33, 0.25
	Linux embed-certs-994238 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [eeb206142ca732fa558448af09c2ae58e659129be2cbc504fe86e8157c2dd6a7] <==
	I0927 00:07:40.229314       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 00:07:46.505820       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0927 00:08:23.400167       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0927 00:08:40.228544       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:08:40.228873       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 00:08:40.228955       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 00:08:40.229749       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:08:40.229837       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0927 00:08:40.230993       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 00:08:58.595881       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0927 00:09:40.966413       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0927 00:10:05.493099       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0927 00:10:40.229639       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:10:40.229837       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0927 00:10:40.229861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0927 00:10:40.231926       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:10:40.231974       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0927 00:10:40.231991       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 00:10:51.036352       1 stats.go:136] "Error getting keys" err="empty key: \"\""
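The recurring 503s above come from the aggregation layer failing to reach the metrics-server that backs the v1beta1.metrics.k8s.io APIService (its pod is stuck in ImagePullBackOff, per the kubelet log later in this section). As a hedged illustration, the client-go sketch below probes that group/version through discovery; the kubeconfig path is a placeholder, not a path taken from this run.

// Hypothetical sketch: ask the apiserver's discovery endpoint whether the
// aggregated metrics.k8s.io/v1beta1 group/version is actually being served,
// which is what keeps failing with 503 in the kube-apiserver log above.
// The kubeconfig path is a placeholder for wherever minikube wrote it.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A healthy metrics-server would list node/pod metrics resources here;
	// with the backend unreachable, this call surfaces the aggregation error instead.
	list, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("metrics API not served:", err)
		return
	}
	for _, r := range list.APIResources {
		fmt.Println("served resource:", r.Name)
	}
}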
	
	
	==> kube-controller-manager [2064521c7e43dcd5a01744b17ef543a0fcea126d5136818f5467db3fd843a708] <==
	I0927 00:04:44.071635       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:05:13.873954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:05:14.079576       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:05:43.879327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:05:44.090305       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:06:13.885120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:06:14.099011       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:06:43.893550       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:06:44.108636       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:07:13.901420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:07:14.118378       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:07:43.908529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:07:44.128033       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:08:13.916108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:08:14.136329       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:08:43.921818       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:08:44.146919       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:09:13.927774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:09:14.157566       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:09:43.933686       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:09:44.165427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:10:13.939604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:10:14.175353       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0927 00:10:43.944926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0927 00:10:44.184802       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [4110a56d3c6a02ad2751fa18e910a6697249fe651168060c235f3e1b24104746] <==
	I0926 23:52:40.606559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:52:40.707795       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:52:40.707840       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.66"]
	E0926 23:52:40.707947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:52:40.754693       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0926 23:52:40.754740       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0926 23:52:40.754767       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:52:40.766680       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:52:40.767204       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:52:40.767266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:52:40.773886       1 config.go:200] "Starting service config controller"
	I0926 23:52:40.773969       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:52:40.773999       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:52:40.774005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:52:40.774032       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:52:40.774064       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:52:40.775177       1 config.go:309] "Starting node config controller"
	I0926 23:52:40.775220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:52:40.775231       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:52:40.875125       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 23:52:40.875216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:52:40.875268       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c8dd25e029012d2f44f7e2da77fdb0b8662789df2d0a107ad874d2db1abec8b4] <==
	I0926 23:52:36.618741       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:52:39.183358       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 23:52:39.183560       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 23:52:39.183594       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:52:39.183674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:52:39.238696       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 23:52:39.238745       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:52:39.242687       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 23:52:39.243025       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:52:39.243652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:52:39.243749       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:52:39.344350       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 27 00:10:14 embed-certs-994238 kubelet[1219]: E0927 00:10:14.117905    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931814117284507  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:14 embed-certs-994238 kubelet[1219]: E0927 00:10:14.808256    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:10:14 embed-certs-994238 kubelet[1219]: E0927 00:10:14.808593    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt" podUID="765bffdb-42c1-4742-b6f6-448a5ca12c32"
	Sep 27 00:10:16 embed-certs-994238 kubelet[1219]: I0927 00:10:16.807725    1219 scope.go:117] "RemoveContainer" containerID="016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf"
	Sep 27 00:10:16 embed-certs-994238 kubelet[1219]: E0927 00:10:16.807991    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:10:24 embed-certs-994238 kubelet[1219]: E0927 00:10:24.120941    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931824119918825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:24 embed-certs-994238 kubelet[1219]: E0927 00:10:24.120975    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931824119918825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:27 embed-certs-994238 kubelet[1219]: I0927 00:10:27.807739    1219 scope.go:117] "RemoveContainer" containerID="016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf"
	Sep 27 00:10:27 embed-certs-994238 kubelet[1219]: E0927 00:10:27.807944    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:10:28 embed-certs-994238 kubelet[1219]: E0927 00:10:28.808559    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt" podUID="765bffdb-42c1-4742-b6f6-448a5ca12c32"
	Sep 27 00:10:29 embed-certs-994238 kubelet[1219]: E0927 00:10:29.808983    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:10:34 embed-certs-994238 kubelet[1219]: E0927 00:10:34.122373    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931834121996555  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:34 embed-certs-994238 kubelet[1219]: E0927 00:10:34.122402    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931834121996555  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:40 embed-certs-994238 kubelet[1219]: E0927 00:10:40.808781    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:10:42 embed-certs-994238 kubelet[1219]: I0927 00:10:42.807390    1219 scope.go:117] "RemoveContainer" containerID="016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf"
	Sep 27 00:10:42 embed-certs-994238 kubelet[1219]: E0927 00:10:42.807738    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:10:42 embed-certs-994238 kubelet[1219]: E0927 00:10:42.809715    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt" podUID="765bffdb-42c1-4742-b6f6-448a5ca12c32"
	Sep 27 00:10:44 embed-certs-994238 kubelet[1219]: E0927 00:10:44.124560    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931844124190983  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:44 embed-certs-994238 kubelet[1219]: E0927 00:10:44.124611    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931844124190983  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:51 embed-certs-994238 kubelet[1219]: E0927 00:10:51.810008    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nr4tj" podUID="d537925e-684b-44b3-b200-4a721ee32ca7"
	Sep 27 00:10:54 embed-certs-994238 kubelet[1219]: E0927 00:10:54.126719    1219 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758931854125333432  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:54 embed-certs-994238 kubelet[1219]: E0927 00:10:54.126756    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758931854125333432  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 27 00:10:56 embed-certs-994238 kubelet[1219]: I0927 00:10:56.807173    1219 scope.go:117] "RemoveContainer" containerID="016e883d41d0bbae219c11eda572dba5e41c6cd53f9c78e28db481fef48f5fbf"
	Sep 27 00:10:56 embed-certs-994238 kubelet[1219]: E0927 00:10:56.807344    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6kgrc_kubernetes-dashboard(ecd98bba-a3d7-4bea-aa51-e341fb975527)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6kgrc" podUID="ecd98bba-a3d7-4bea-aa51-e341fb975527"
	Sep 27 00:10:57 embed-certs-994238 kubelet[1219]: E0927 00:10:57.809296    1219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9wwwt" podUID="765bffdb-42c1-4742-b6f6-448a5ca12c32"
	
	
	==> storage-provisioner [67eb663ec36d3cde173a9100ade61abd7d779cf06b880e3cacaa69dd25c4dcb2] <==
	W0927 00:10:36.886412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:38.890822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:38.898203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:40.903342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:40.911350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:42.916604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:42.923545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:44.927318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:44.932789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:46.937333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:46.943245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:48.946599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:48.956172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:50.959313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:50.965182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:52.969057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:52.978687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:54.982024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:54.987400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:56.991148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:57.001162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:59.005424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:10:59.011930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:11:01.017620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0927 00:11:01.025388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7c257886ddfab73c46628c5c6cdb1271b5c9afb3639019fbefe26c7af272f819] <==
	I0926 23:52:40.535748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 23:53:10.539307       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-994238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt: exit status 1 (61.994949ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nr4tj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9wwwt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-994238 describe pod metrics-server-746fcd58dc-nr4tj kubernetes-dashboard-855c9754f9-9wwwt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.75s)
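Note (added commentary, not part of the captured output): the failure above is driven by the two ImagePullBackOff loops in the kubelet log, an unauthenticated Docker Hub pull of kubernetesui/dashboard that keeps hitting the toomanyrequests rate limit, and the metrics-server pod pointing at the intentionally unreachable fake.domain registry. A hedged sketch of how the rate-limited pull could be reproduced on the node and worked around by pre-loading the image (profile and deployment names are taken from the logs above; whether the digest-pinned reference resolves against a pre-loaded copy depends on the runtime):

	# reproduce the pull failure directly on the node
	out/minikube-linux-amd64 -p embed-certs-994238 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0
	# pre-load the image from the local container store (assumes it is already available locally)
	out/minikube-linux-amd64 -p embed-certs-994238 image load docker.io/kubernetesui/dashboard:v2.7.0
	kubectl --context embed-certs-994238 -n kubernetes-dashboard rollout restart deployment kubernetes-dashboard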

                                                
                                    

Test pass (270/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.11
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 4.17
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 106.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 145.24
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.55
35 TestAddons/parallel/Registry 16.59
36 TestAddons/parallel/RegistryCreds 0.69
38 TestAddons/parallel/InspektorGadget 5.34
39 TestAddons/parallel/MetricsServer 6.47
42 TestAddons/parallel/Headlamp 23.17
43 TestAddons/parallel/CloudSpanner 5.75
45 TestAddons/parallel/NvidiaDevicePlugin 6.62
46 TestAddons/parallel/Yakd 11.27
48 TestAddons/StoppedEnableDisable 84.2
49 TestCertOptions 84.35
50 TestCertExpiration 293.9
52 TestForceSystemdFlag 68.3
53 TestForceSystemdEnv 55.34
55 TestKVMDriverInstallOrUpdate 0.54
59 TestErrorSpam/setup 41.68
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.77
63 TestErrorSpam/unpause 1.96
64 TestErrorSpam/stop 81.09
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 81.93
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 41.96
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
76 TestFunctional/serial/CacheCmd/cache/add_local 1.18
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 40.12
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.59
87 TestFunctional/serial/LogsFileCmd 1.63
88 TestFunctional/serial/InvalidService 4.51
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.26
93 TestFunctional/parallel/InternationalLanguage 0.13
94 TestFunctional/parallel/StatusCmd 0.78
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.46
104 TestFunctional/parallel/MySQL 21.91
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.48
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
114 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
129 TestFunctional/parallel/ProfileCmd/profile_list 0.33
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
131 TestFunctional/parallel/MountCmd/any-port 115.65
132 TestFunctional/parallel/MountCmd/specific-port 1.75
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.48
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
141 TestFunctional/parallel/ImageCommands/Setup 0.41
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
149 TestFunctional/parallel/ServiceCmd/List 1.28
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 201.9
162 TestMultiControlPlane/serial/DeployApp 6.39
163 TestMultiControlPlane/serial/PingHostFromPods 1.27
164 TestMultiControlPlane/serial/AddWorkerNode 44.85
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.64
168 TestMultiControlPlane/serial/StopSecondaryNode 85.87
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 34.84
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.3
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.48
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 268.34
176 TestMultiControlPlane/serial/RestartCluster 96.1
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 77.23
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 82.84
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.79
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.71
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 8.54
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 88.39
215 TestMountStart/serial/StartWithMountFirst 23.76
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 24.05
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.35
222 TestMountStart/serial/RestartStopped 20.48
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 101.03
227 TestMultiNode/serial/DeployApp2Nodes 4.85
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 41.97
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.63
232 TestMultiNode/serial/CopyFile 7.39
233 TestMultiNode/serial/StopNode 2.52
234 TestMultiNode/serial/StartAfterStop 42.13
235 TestMultiNode/serial/RestartKeepsNodes 303.04
236 TestMultiNode/serial/DeleteNode 2.78
237 TestMultiNode/serial/StopMultiNode 171.93
238 TestMultiNode/serial/RestartMultiNode 87.71
239 TestMultiNode/serial/ValidateNameConflict 42.19
246 TestScheduledStopUnix 111.4
250 TestRunningBinaryUpgrade 161.76
252 TestKubernetesUpgrade 199.89
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 86.88
257 TestNoKubernetes/serial/StartWithStopK8s 28.43
266 TestPause/serial/Start 91.78
267 TestNoKubernetes/serial/Start 49.19
268 TestStoppedBinaryUpgrade/Setup 0.38
269 TestStoppedBinaryUpgrade/Upgrade 105.84
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
271 TestNoKubernetes/serial/ProfileList 1.29
272 TestNoKubernetes/serial/Stop 1.34
273 TestNoKubernetes/serial/StartNoArgs 39.08
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestNetworkPlugins/group/false 3.72
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
289 TestStartStop/group/old-k8s-version/serial/FirstStart 103.2
291 TestStartStop/group/no-preload/serial/FirstStart 119.57
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.93
294 TestStartStop/group/old-k8s-version/serial/DeployApp 10.38
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.8
296 TestStartStop/group/old-k8s-version/serial/Stop 85.61
297 TestStartStop/group/no-preload/serial/DeployApp 9.32
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
300 TestStartStop/group/no-preload/serial/Stop 84.03
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 74.15
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/old-k8s-version/serial/SecondStart 48.18
306 TestStartStop/group/newest-cni/serial/FirstStart 49.37
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.84
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/no-preload/serial/SecondStart 85.71
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
314 TestStartStop/group/old-k8s-version/serial/Pause 3.41
316 TestStartStop/group/embed-certs/serial/FirstStart 119.67
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.26
319 TestStartStop/group/newest-cni/serial/Stop 9.01
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/newest-cni/serial/SecondStart 60.73
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.97
326 TestNetworkPlugins/group/auto/Start 101.54
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
330 TestStartStop/group/no-preload/serial/Pause 3.32
331 TestNetworkPlugins/group/kindnet/Start 67.37
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
335 TestStartStop/group/newest-cni/serial/Pause 4.54
336 TestNetworkPlugins/group/calico/Start 90.33
337 TestStartStop/group/embed-certs/serial/DeployApp 9.36
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
339 TestStartStop/group/embed-certs/serial/Stop 85.25
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/auto/KubeletFlags 0.29
342 TestNetworkPlugins/group/auto/NetCatPod 10.49
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
344 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
345 TestNetworkPlugins/group/auto/DNS 0.16
346 TestNetworkPlugins/group/auto/Localhost 0.15
347 TestNetworkPlugins/group/auto/HairPin 0.14
348 TestNetworkPlugins/group/kindnet/DNS 0.18
349 TestNetworkPlugins/group/kindnet/Localhost 0.14
350 TestNetworkPlugins/group/kindnet/HairPin 0.15
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/custom-flannel/Start 69.64
353 TestNetworkPlugins/group/enable-default-cni/Start 108.15
354 TestNetworkPlugins/group/calico/KubeletFlags 0.23
355 TestNetworkPlugins/group/calico/NetCatPod 11.26
356 TestNetworkPlugins/group/calico/DNS 0.15
357 TestNetworkPlugins/group/calico/Localhost 0.14
358 TestNetworkPlugins/group/calico/HairPin 0.12
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.67
360 TestStartStop/group/embed-certs/serial/SecondStart 63.16
361 TestNetworkPlugins/group/flannel/Start 102.39
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
364 TestNetworkPlugins/group/custom-flannel/DNS 0.19
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
368 TestNetworkPlugins/group/bridge/Start 88.67
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
374 TestNetworkPlugins/group/flannel/ControllerPod 6.01
375 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
376 TestNetworkPlugins/group/flannel/NetCatPod 10.25
377 TestNetworkPlugins/group/flannel/DNS 0.16
378 TestNetworkPlugins/group/flannel/Localhost 0.13
379 TestNetworkPlugins/group/flannel/HairPin 0.12
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
381 TestNetworkPlugins/group/bridge/NetCatPod 11.27
382 TestNetworkPlugins/group/bridge/DNS 0.15
383 TestNetworkPlugins/group/bridge/Localhost 0.12
384 TestNetworkPlugins/group/bridge/HairPin 0.13
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/embed-certs/serial/Pause 2.88
TestDownloadOnly/v1.28.0/json-events (7.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-957403 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-957403 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.110903261s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.11s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0926 22:29:01.316269    9914 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0926 22:29:01.316373    9914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
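Note (added commentary): preload-exists only asserts that the tarball fetched during the json-events run above is present in the cache directory logged at preload.go:146. A hedged way to inspect it by hand (MINIKUBE_HOME defaults to ~/.minikube when unset):

	ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/"
	# expect preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 among the entries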

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-957403
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-957403: exit status 85 (58.326673ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-957403 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:28:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:28:54.246652    9926 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:28:54.246789    9926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:54.246801    9926 out.go:374] Setting ErrFile to fd 2...
	I0926 22:28:54.246807    9926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:28:54.247031    9926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	W0926 22:28:54.247183    9926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21642-6020/.minikube/config/config.json: open /home/jenkins/minikube-integration/21642-6020/.minikube/config/config.json: no such file or directory
	I0926 22:28:54.247701    9926 out.go:368] Setting JSON to true
	I0926 22:28:54.248644    9926 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":679,"bootTime":1758925055,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:28:54.248732    9926 start.go:140] virtualization: kvm guest
	I0926 22:28:54.251127    9926 out.go:99] [download-only-957403] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0926 22:28:54.251278    9926 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 22:28:54.251333    9926 notify.go:220] Checking for updates...
	I0926 22:28:54.252555    9926 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:28:54.253868    9926 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:28:54.255189    9926 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:28:54.259534    9926 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:28:54.260961    9926 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:28:54.263388    9926 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:28:54.263680    9926 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:28:54.784436    9926 out.go:99] Using the kvm2 driver based on user configuration
	I0926 22:28:54.784480    9926 start.go:304] selected driver: kvm2
	I0926 22:28:54.784489    9926 start.go:924] validating driver "kvm2" against <nil>
	I0926 22:28:54.784882    9926 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:28:54.785038    9926 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:28:54.800548    9926 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:28:54.800585    9926 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21642-6020/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0926 22:28:54.814463    9926 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0926 22:28:54.814519    9926 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:28:54.815154    9926 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0926 22:28:54.815333    9926 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:28:54.815368    9926 cni.go:84] Creating CNI manager for ""
	I0926 22:28:54.815422    9926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0926 22:28:54.815434    9926 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 22:28:54.815502    9926 start.go:348] cluster config:
	{Name:download-only-957403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-957403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:28:54.815724    9926 iso.go:125] acquiring lock: {Name:mk665cb8117fd96bfc46b1e5a29611848cf59d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:28:54.817598    9926 out.go:99] Downloading VM boot image ...
	I0926 22:28:54.817655    9926 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21642-6020/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0926 22:28:57.601744    9926 out.go:99] Starting "download-only-957403" primary control-plane node in "download-only-957403" cluster
	I0926 22:28:57.601770    9926 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0926 22:28:57.620199    9926 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:28:57.620248    9926 cache.go:58] Caching tarball of preloaded images
	I0926 22:28:57.620429    9926 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0926 22:28:57.622177    9926 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0926 22:28:57.622203    9926 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0926 22:28:57.649480    9926 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-957403 host does not exist
	  To start a cluster, run: "minikube start -p download-only-957403"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
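Note (added commentary): the non-zero exit from "minikube logs" is the expected outcome here, because a --download-only profile never creates a VM; the stdout above ends with "The control-plane node download-only-957403 host does not exist". A hedged reproduction with a hypothetical profile name:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-demo; echo "logs exit: $?"   # non-zero (85 in this run), there is no host to collect logs from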

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-957403
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-123956 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-123956 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4.166943056s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.17s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0926 22:29:05.812397    9914 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0926 22:29:05.812448    9914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-6020/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-123956
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-123956: exit status 85 (56.699643ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-957403 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-957403                                                                                                                                                                             │ download-only-957403 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-123956 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-123956 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:01.686172   10130 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:01.686310   10130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.686318   10130 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:01.686324   10130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:01.686525   10130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:29:01.687057   10130 out.go:368] Setting JSON to true
	I0926 22:29:01.687894   10130 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":687,"bootTime":1758925055,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:01.687982   10130 start.go:140] virtualization: kvm guest
	I0926 22:29:01.689949   10130 out.go:99] [download-only-123956] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:01.690090   10130 notify.go:220] Checking for updates...
	I0926 22:29:01.691509   10130 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:29:01.693060   10130 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:01.694597   10130 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:29:01.696245   10130 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:29:01.697770   10130 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-123956 host does not exist
	  To start a cluster, run: "minikube start -p download-only-123956"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-123956
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0926 22:29:06.394252    9914 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-019280 --alsologtostderr --binary-mirror http://127.0.0.1:43721 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-019280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-019280
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (106.43s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-379226 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-379226 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.521254595s)
helpers_test.go:175: Cleaning up "offline-crio-379226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-379226
--- PASS: TestOffline (106.43s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-330674
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-330674: exit status 85 (50.670013ms)

                                                
                                                
-- stdout --
	* Profile "addons-330674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-330674"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-330674
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-330674: exit status 85 (51.746429ms)

                                                
                                                
-- stdout --
	* Profile "addons-330674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-330674"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (145.24s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-330674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-330674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m25.236570873s)
--- PASS: TestAddons/Setup (145.24s)
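Note (added commentary): Setup enables the whole addon set through repeated --addons flags at start time; the same addons can also be toggled on the running profile, which is what the PreSetup and parallel subtests exercise. A hedged example against this profile:

	out/minikube-linux-amd64 -p addons-330674 addons list
	out/minikube-linux-amd64 -p addons-330674 addons enable metrics-server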

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-330674 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-330674 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-330674 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-330674 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [445fcb70-08b0-49c8-b65c-eda21a3d6feb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [445fcb70-08b0-49c8-b65c-eda21a3d6feb] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.006939172s
addons_test.go:694: (dbg) Run:  kubectl --context addons-330674 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-330674 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-330674 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.55s)

                                                
                                    
TestAddons/parallel/Registry (16.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.596872ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-2t8mg" [c1b89f10-d5b6-445e-b282-034ab8eaa0ba] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.067778595s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2jz4s" [ad4c665f-afe2-4a63-95bb-447d8efe7a88] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009199579s
addons_test.go:392: (dbg) Run:  kubectl --context addons-330674 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-330674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-330674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.527262435s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 ip
2025/09/26 22:32:05 [DEBUG] GET http://192.168.39.36:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.59s)
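Note (added commentary): the Registry subtest resolves the in-cluster service name from a throwaway busybox pod and then fetches the node IP so the registry-proxy port can be reached from the host (the DEBUG GET against 192.168.39.36:5000 above). A hedged manual probe along the same lines (the /v2/ path is an assumption; the test only hits the bare port):

	kubectl --context addons-330674 run --rm -it registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local
	curl -sI "http://$(out/minikube-linux-amd64 -p addons-330674 ip):5000/v2/"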

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.262454ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-330674
addons_test.go:332: (dbg) Run:  kubectl --context addons-330674 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-c5fsh" [1d4706ed-d612-42b6-8ce7-1c3b53174964] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007505571s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.34s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.273452ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lwlpp" [2b5d3bcf-5ffd-48cc-a6b5-c5c418e1348e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.069535398s
addons_test.go:463: (dbg) Run:  kubectl --context addons-330674 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable metrics-server --alsologtostderr -v=1: (1.316490048s)
--- PASS: TestAddons/parallel/MetricsServer (6.47s)

                                                
                                    
TestAddons/parallel/Headlamp (23.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-330674 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-g8vq8" [39c174d3-1916-4877-95d0-817a3184da5c] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-g8vq8" [39c174d3-1916-4877-95d0-817a3184da5c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-g8vq8" [39c174d3-1916-4877-95d0-817a3184da5c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.01856645s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable headlamp --alsologtostderr -v=1: (6.230552688s)
--- PASS: TestAddons/parallel/Headlamp (23.17s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-zxjb7" [e97eba15-8488-4974-8d80-8c23dabfee5b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007362494s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.75s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8pbfv" [1929f235-8f94-4b86-ba34-fcdb88f8378b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00429125s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
TestAddons/parallel/Yakd (11.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-dzmfc" [26468edc-68b1-416e-ad25-d29a33f6ba0f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.074600846s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-330674 addons disable yakd --alsologtostderr -v=1: (6.19169749s)
--- PASS: TestAddons/parallel/Yakd (11.27s)

                                                
                                    
TestAddons/StoppedEnableDisable (84.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-330674
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-330674: (1m23.933615614s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-330674
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-330674
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-330674
--- PASS: TestAddons/StoppedEnableDisable (84.20s)

                                                
                                    
TestCertOptions (84.35s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-318136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-318136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.779244873s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-318136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-318136 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-318136 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-318136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-318136
--- PASS: TestCertOptions (84.35s)
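A minimal way to eyeball what this test asserts, reusing the flags and cert path from the log above (the grep pattern simply targets openssl's usual text layout):
  $ minikube -p cert-options-318136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
  $ kubectl --context cert-options-318136 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
The SAN list should include 192.168.15.15, localhost and www.google.com, and the kubeconfig server URL should end in the requested port 8555.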

                                                
                                    
TestCertExpiration (293.9s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-648174 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-648174 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.731238479s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-648174 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-648174 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.302233375s)
helpers_test.go:175: Cleaning up "cert-expiration-648174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-648174
--- PASS: TestCertExpiration (293.90s)
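The two starts above first issue 3-minute certificates and then regenerate them with --cert-expiration=8760h (one year). A quick manual check of the resulting expiry, reusing the cert path from TestCertOptions, would be:
  $ minikube -p cert-expiration-648174 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
which prints a notAfter= line roughly one year out after the second start.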

                                                
                                    
TestForceSystemdFlag (68.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-489274 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-489274 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.1285622s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-489274 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-489274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-489274
--- PASS: TestForceSystemdFlag (68.30s)
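The cat of /etc/crio/crio.conf.d/02-crio.conf above is presumably how the test confirms CRI-O was switched to the systemd cgroup manager; a narrower manual check (standard CRI-O key name assumed) would be:
  $ minikube -p force-systemd-flag-489274 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
expecting cgroup_manager = "systemd" when --force-systemd took effect.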

                                                
                                    
TestForceSystemdEnv (55.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-429303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-429303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.41174438s)
helpers_test.go:175: Cleaning up "force-systemd-env-429303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-429303
--- PASS: TestForceSystemdEnv (55.34s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.54s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0926 23:42:51.112557    9914 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 23:42:51.112728    9914 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4078725382/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:42:51.147858    9914 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4078725382/001/docker-machine-driver-kvm2 version is 1.1.1
W0926 23:42:51.147912    9914 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0926 23:42:51.148092    9914 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0926 23:42:51.148169    9914 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4078725382/001/docker-machine-driver-kvm2
I0926 23:42:51.511968    9914 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4078725382/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:42:51.529277    9914 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4078725382/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.54s)
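The log shows minikube noticing a stale 1.1.1 driver on PATH and fetching 1.37.0 from the release URL with a checksum side-file. The same update can be done by hand with the URLs above; the verification line assumes the .sha256 asset contains only the bare digest, and the destination directory is just an example (it only needs to be on PATH):
  $ curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
  $ curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
  $ echo "$(cat docker-machine-driver-kvm2-amd64.sha256)  docker-machine-driver-kvm2-amd64" | sha256sum -c -
  $ install -m 0755 docker-machine-driver-kvm2-amd64 "$HOME/.local/bin/docker-machine-driver-kvm2"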

                                                
                                    
TestErrorSpam/setup (41.68s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-756190 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-756190 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-756190 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-756190 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.679451217s)
--- PASS: TestErrorSpam/setup (41.68s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
TestErrorSpam/unpause (1.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

                                                
                                    
TestErrorSpam/stop (81.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop: (1m17.732917043s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop: (1.553652969s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-756190 --log_dir /tmp/nospam-756190 stop: (1.798689674s)
--- PASS: TestErrorSpam/stop (81.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21642-6020/.minikube/files/etc/test/nested/copy/9914/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-615476 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.928772577s)
--- PASS: TestFunctional/serial/StartWithProxy (81.93s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.96s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0926 22:45:13.866711    9914 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-615476 --alsologtostderr -v=8: (41.954316232s)
functional_test.go:678: soft start took 41.955044946s for "functional-615476" cluster.
I0926 22:45:55.821395    9914 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (41.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-615476 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:3.1: (1.079697057s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:3.3: (1.115371764s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 cache add registry.k8s.io/pause:latest: (1.085472659s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-615476 /tmp/TestFunctionalserialCacheCmdcacheadd_local2410901668/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache add minikube-local-cache-test:functional-615476
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache delete minikube-local-cache-test:functional-615476
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-615476
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.016578ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 cache reload: (1.051098321s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
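Taken together, the CacheCmd tests above walk the full image-cache lifecycle. Condensed into one sequence with the same profile and images as in the logs (minikube again standing in for out/minikube-linux-amd64):
  $ minikube -p functional-615476 cache add registry.k8s.io/pause:3.1        # pull, store locally, load into the node
  $ minikube cache list                                                      # show the local cache
  $ minikube -p functional-615476 ssh sudo crictl images                     # confirm the image is present on the node
  $ minikube -p functional-615476 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ minikube -p functional-615476 cache reload                               # push cached images back onto the node
  $ minikube cache delete registry.k8s.io/pause:3.1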

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 kubectl -- --context functional-615476 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-615476 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0926 22:46:32.971637    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:32.978098    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:32.989492    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:33.010880    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:33.052320    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:33.133788    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:33.295308    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:33.616997    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:34.259037    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:35.540695    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:46:38.103584    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-615476 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.123620362s)
functional_test.go:776: restart took 40.123752079s for "functional-615476" cluster.
I0926 22:46:42.978790    9914 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (40.12s)
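One way to confirm that the --extra-config flag actually reached the API server (assuming the usual kubeadm static-pod label) is to inspect the pod's command line:
  $ kubectl --context functional-615476 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission
which should list NamespaceAutoProvision among the enabled admission plugins.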

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-615476 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 logs
E0926 22:46:43.225309    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 logs: (1.588846156s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 logs --file /tmp/TestFunctionalserialLogsFileCmd4202436804/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 logs --file /tmp/TestFunctionalserialLogsFileCmd4202436804/001/logs.txt: (1.627394567s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.63s)

                                                
                                    
TestFunctional/serial/InvalidService (4.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-615476 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-615476
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-615476: exit status 115 (305.006703ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.253:31875 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-615476 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 config get cpus: exit status 14 (56.154283ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 config get cpus: exit status 14 (52.068801ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (127.963156ms)

                                                
                                                
-- stdout --
	* [functional-615476] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:53:09.313395   21054 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:09.313641   21054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.313654   21054 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:09.313658   21054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.313850   21054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:53:09.314292   21054 out.go:368] Setting JSON to false
	I0926 22:53:09.315169   21054 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1758925055,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:09.315259   21054 start.go:140] virtualization: kvm guest
	I0926 22:53:09.317201   21054 out.go:179] * [functional-615476] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:09.318420   21054 notify.go:220] Checking for updates...
	I0926 22:53:09.318473   21054 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:09.319846   21054 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:09.321265   21054 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:53:09.322724   21054 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:53:09.323790   21054 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:09.325019   21054 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:09.326620   21054 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:53:09.327084   21054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.327142   21054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.340913   21054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0926 22:53:09.341385   21054 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.341891   21054 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.341921   21054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.342263   21054 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.342499   21054 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.342764   21054 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:09.343174   21054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.343249   21054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.357151   21054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0926 22:53:09.357629   21054 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.358120   21054 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.358143   21054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.358491   21054 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.358689   21054 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.391676   21054 out.go:179] * Using the kvm2 driver based on existing profile
	I0926 22:53:09.392970   21054 start.go:304] selected driver: kvm2
	I0926 22:53:09.392986   21054 start.go:924] validating driver "kvm2" against &{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.393097   21054 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:09.395086   21054 out.go:203] 
	W0926 22:53:09.396202   21054 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:53:09.397310   21054 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-615476 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (131.356385ms)

                                                
                                                
-- stdout --
	* [functional-615476] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:53:09.124037   21016 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:09.124140   21016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.124144   21016 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:09.124148   21016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:09.124429   21016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 22:53:09.124896   21016 out.go:368] Setting JSON to false
	I0926 22:53:09.125807   21016 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1758925055,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:53:09.125929   21016 start.go:140] virtualization: kvm guest
	I0926 22:53:09.127984   21016 out.go:179] * [functional-615476] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0926 22:53:09.129369   21016 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:53:09.129349   21016 notify.go:220] Checking for updates...
	I0926 22:53:09.131621   21016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:53:09.132984   21016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 22:53:09.134383   21016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 22:53:09.135728   21016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:53:09.137374   21016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:53:09.139088   21016 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:53:09.139492   21016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.139561   21016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.153464   21016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0926 22:53:09.153965   21016 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.154481   21016 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.154513   21016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.155030   21016 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.155289   21016 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.155597   21016 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:53:09.155963   21016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 22:53:09.156044   21016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 22:53:09.169650   21016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0926 22:53:09.170136   21016 main.go:141] libmachine: () Calling .GetVersion
	I0926 22:53:09.170600   21016 main.go:141] libmachine: Using API Version  1
	I0926 22:53:09.170622   21016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 22:53:09.170965   21016 main.go:141] libmachine: () Calling .GetMachineName
	I0926 22:53:09.171125   21016 main.go:141] libmachine: (functional-615476) Calling .DriverName
	I0926 22:53:09.203040   21016 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0926 22:53:09.204340   21016 start.go:304] selected driver: kvm2
	I0926 22:53:09.204359   21016 start.go:924] validating driver "kvm2" against &{Name:functional-615476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-615476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:53:09.204478   21016 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:53:09.206521   21016 out.go:203] 
	W0926 22:53:09.207720   21016 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 22:53:09.208693   21016 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
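This subtest runs minikube under a French locale and deliberately requests too little memory, then checks the localized failure above; in English the message corresponds to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal sketch of reproducing that localized exit outside the harness follows; the locale value and exact flags are assumptions, not taken from the test source.

    # Assumed reproduction: a French locale plus an undersized memory request
    # should exit with the localized RSRC_INSUFFICIENT_REQ_MEMORY error seen above.
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-615476 --dry-run --memory 250MB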

                                                
                                    
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)
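The three invocations above cover the default, Go-template and JSON output modes of the status command. For reference, the template and JSON forms outside the test harness, with the field names copied from the run above:

    # Select individual status fields with a Go template:
    out/minikube-linux-amd64 -p functional-615476 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # Machine-readable status:
    out/minikube-linux-amd64 -p functional-615476 status -o json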

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh -n functional-615476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cp functional-615476:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1913802868/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh -n functional-615476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh -n functional-615476 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
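The cp subtest copies a host file into the guest, copies it back out, and copies into a guest directory that does not yet exist, reading each result back over ssh. A condensed sketch of the same round trip; the host-side destination path is illustrative:

    # Host to guest, then verify over ssh:
    out/minikube-linux-amd64 -p functional-615476 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-615476 ssh -n functional-615476 "sudo cat /home/docker/cp-test.txt"
    # Guest back to host (destination path chosen for illustration):
    out/minikube-linux-amd64 -p functional-615476 cp functional-615476:/home/docker/cp-test.txt /tmp/cp-test.txt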

                                                
                                    
TestFunctional/parallel/MySQL (21.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-615476 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9dftf" [e6766175-9c7c-4531-8f20-a12f26e25a36] Pending
helpers_test.go:352: "mysql-5bb876957f-9dftf" [e6766175-9c7c-4531-8f20-a12f26e25a36] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-9dftf" [e6766175-9c7c-4531-8f20-a12f26e25a36] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003353472s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-615476 exec mysql-5bb876957f-9dftf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-615476 exec mysql-5bb876957f-9dftf -- mysql -ppassword -e "show databases;": exit status 1 (136.053803ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0926 22:47:11.147042    9914 retry.go:31] will retry after 1.404953155s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-615476 exec mysql-5bb876957f-9dftf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.91s)
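The first kubectl exec fails with ERROR 2002 because the pod is already Running while mysqld is not yet accepting socket connections; the harness retries about 1.4s later and succeeds. An illustrative loop with the same effect, using an arbitrary fixed interval rather than the harness's backoff:

    # Retry the probe query until mysqld accepts connections (interval chosen arbitrarily):
    until kubectl --context functional-615476 exec mysql-5bb876957f-9dftf -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done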

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9914/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/test/nested/copy/9914/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
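FileSync checks that a host-side fixture shows up in the guest at /etc/test/nested/copy/9914/hosts. minikube mirrors files placed under the files directory of its home into the VM at the same relative path, applied when the profile is started; the staging layout below is an assumption about how such a fixture could be seeded by hand, not copied from the test source:

    # Assumed manual equivalent: stage the file under the minikube home "files" tree,
    # let the next start of the profile sync it, then read it back from the mirrored path.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/9914
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9914/hosts
    out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/test/nested/copy/9914/hosts"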

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9914.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/ssl/certs/9914.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9914.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /usr/share/ca-certificates/9914.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/99142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/ssl/certs/99142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/99142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /usr/share/ca-certificates/99142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)
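The numeric names checked here (51391683.0 and 3ec20f2e.0) are OpenSSL subject-hash file names; the test groups each with its matching PEM (9914.pem and 99142.pem respectively). Assuming the .0 file is simply the same certificate stored under its subject hash, and that openssl is available where the PEM lives, the hash part of the name can be recomputed:

    # Print the subject hash that names the corresponding .0 file:
    openssl x509 -noout -subject_hash -in 9914.pem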

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-615476 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "sudo systemctl is-active docker": exit status 1 (196.859344ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "sudo systemctl is-active containerd": exit status 1 (198.928599ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
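With crio as the active runtime, systemctl reports docker and containerd as inactive and exits with status 3 on the remote side, which the ssh wrapper surfaces as a non-zero exit; the test passes precisely because the other runtimes are not running. The same check by hand:

    # systemctl prints "inactive" and exits 3 when the unit is not running; the ssh wrapper
    # reports that as the "Process exited with status 3" lines seen above.
    out/minikube-linux-amd64 -p functional-615476 ssh "sudo systemctl is-active docker"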

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "281.263911ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "46.799512ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "286.575785ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "45.388147ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
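The two timings above compare the full and lightweight listing paths: with --light the command skips validating each profile's cluster status, which is why it returns in roughly 45ms here versus about 286ms for the full listing.

    # Full listing (validates cluster status per profile) vs. lightweight listing:
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light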

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (115.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdany-port2791596571/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758926833697966459" to /tmp/TestFunctionalparallelMountCmdany-port2791596571/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758926833697966459" to /tmp/TestFunctionalparallelMountCmdany-port2791596571/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758926833697966459" to /tmp/TestFunctionalparallelMountCmdany-port2791596571/001/test-1758926833697966459
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.38675ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:47:13.887654    9914 retry.go:31] will retry after 721.234367ms: exit status 1
E0926 22:47:13.948779    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 26 22:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 26 22:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 26 22:47 test-1758926833697966459
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh cat /mount-9p/test-1758926833697966459
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-615476 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [857b4229-9648-4b45-804e-37c86a2a4dc0] Pending
helpers_test.go:352: "busybox-mount" [857b4229-9648-4b45-804e-37c86a2a4dc0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0926 22:47:54.910141    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [857b4229-9648-4b45-804e-37c86a2a4dc0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [857b4229-9648-4b45-804e-37c86a2a4dc0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m53.004034416s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-615476 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdany-port2791596571/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (115.65s)
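The any-port subtest starts a background 9p mount of a temporary host directory, confirms it with findmnt inside the guest, then runs a busybox pod against the mounted path. A condensed manual equivalent; the host directory name is illustrative:

    # Start the 9p mount in the background, then verify and inspect it from the guest:
    out/minikube-linux-amd64 mount -p functional-615476 /tmp/host-dir:/mount-9p &
    out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-615476 ssh -- ls -la /mount-9p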

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdspecific-port1566913751/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.927948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:49:09.549133    9914 retry.go:31] will retry after 550.835251ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdspecific-port1566913751/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "sudo umount -f /mount-9p": exit status 1 (200.936737ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-615476 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdspecific-port1566913751/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)
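The specific-port variant is the same flow but pins the host-side 9p server to a fixed port with --port instead of letting it pick an ephemeral one (host directory again illustrative):

    # Pin the 9p server to a fixed port, as the test does with 46464:
    out/minikube-linux-amd64 mount -p functional-615476 /tmp/host-dir:/mount-9p --port 46464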

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T" /mount1: exit status 1 (241.999435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:49:11.345571    9914 retry.go:31] will retry after 618.670638ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-615476 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-615476 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1522187760/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
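VerifyCleanup starts three concurrent mounts and then relies on a single kill call to tear them all down, which is why the later per-mount stops find no surviving parent process:

    # Terminate every background mount helper for the profile in one call:
    out/minikube-linux-amd64 mount -p functional-615476 --kill=true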

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-615476 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-615476
localhost/kicbase/echo-server:functional-615476
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-615476 image ls --format short --alsologtostderr:
I0926 22:53:10.364290   21233 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:10.364548   21233 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:10.364557   21233 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:10.364562   21233 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:10.364813   21233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:53:10.365457   21233 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:10.365574   21233 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:10.365957   21233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:10.366033   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:10.379300   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
I0926 22:53:10.379769   21233 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:10.380347   21233 main.go:141] libmachine: Using API Version  1
I0926 22:53:10.380370   21233 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:10.380764   21233 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:10.380985   21233 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:53:10.383187   21233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:10.383249   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:10.396771   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
I0926 22:53:10.397406   21233 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:10.397983   21233 main.go:141] libmachine: Using API Version  1
I0926 22:53:10.398007   21233 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:10.398378   21233 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:10.398588   21233 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:53:10.398875   21233 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:10.398899   21233 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:53:10.402451   21233 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:10.403038   21233 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:53:10.403070   21233 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:10.403262   21233 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:53:10.403432   21233 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:53:10.403579   21233 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:53:10.403732   21233 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:53:10.485837   21233 ssh_runner.go:195] Run: sudo crictl images --output json
I0926 22:53:10.540914   21233 main.go:141] libmachine: Making call to close driver server
I0926 22:53:10.540933   21233 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:10.541223   21233 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:10.541245   21233 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:10.541254   21233 main.go:141] libmachine: Making call to close driver server
I0926 22:53:10.541262   21233 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:10.541263   21233 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
I0926 22:53:10.541530   21233 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
I0926 22:53:10.541583   21233 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:10.541601   21233 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
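The ImageCommands listing subtests each exercise one output format of image ls; all four forms appear in this section. For reference:

    # The same image listing in each supported format:
    out/minikube-linux-amd64 -p functional-615476 image ls --format short
    out/minikube-linux-amd64 -p functional-615476 image ls --format table
    out/minikube-linux-amd64 -p functional-615476 image ls --format json
    out/minikube-linux-amd64 -p functional-615476 image ls --format yaml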

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-615476 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-615476  │ 77f572b38598b │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-615476  │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-615476  │ 7df7989addedf │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-615476 image ls --format table --alsologtostderr:
I0926 22:53:14.189770   21384 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:14.190045   21384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:14.190054   21384 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:14.190058   21384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:14.190228   21384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:53:14.190753   21384 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:14.190859   21384 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:14.191207   21384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:14.191273   21384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:14.204557   21384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
I0926 22:53:14.205097   21384 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:14.205618   21384 main.go:141] libmachine: Using API Version  1
I0926 22:53:14.205638   21384 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:14.206117   21384 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:14.206319   21384 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:53:14.208652   21384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:14.208702   21384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:14.222013   21384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
I0926 22:53:14.222476   21384 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:14.222934   21384 main.go:141] libmachine: Using API Version  1
I0926 22:53:14.222953   21384 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:14.223380   21384 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:14.223595   21384 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:53:14.223843   21384 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:14.223884   21384 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:53:14.227078   21384 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:14.227464   21384 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:53:14.227499   21384 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:14.227696   21384 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:53:14.227880   21384 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:53:14.228056   21384 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:53:14.228193   21384 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:53:14.313056   21384 ssh_runner.go:195] Run: sudo crictl images --output json
I0926 22:53:14.359546   21384 main.go:141] libmachine: Making call to close driver server
I0926 22:53:14.359566   21384 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:14.359873   21384 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:14.359894   21384 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:14.359908   21384 main.go:141] libmachine: Making call to close driver server
I0926 22:53:14.359909   21384 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
I0926 22:53:14.359914   21384 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:14.360227   21384 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:14.360243   21384 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:14.360255   21384 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-615476 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"300903dc6db00f6d063584252919e01b7523321bfc1eb4487814c5fd4b13c9
fa","repoDigests":["docker.io/library/9233acb425f20383f46081d66c3f1c9b55e90b363fe90fd528b7969ccaa7796f-tmp@sha256:65a5c61b9592e0f2b42eccb2a4cd93c4dce2da77c3e739f264b38394971336a8"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-615476"],"size":"4943877"},{"id":"7df7989addedf3a8bb07a2203e7f69eb815733d4064478c787d41606590f1476","repoDigests":["localhost/my-image@sha256:347171b54071cd6d33d9adbe5cea
65124f2d95356eeeb55b1b173787c122a1b4"],"repoTags":["localhost/my-image:functional-615476"],"size":"1468600"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["reg
istry.k8s.io/pause:latest"],"size":"247077"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-miniku
be/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:49
5d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"77f572b38598bf7da1c7c9f8cffcc395f8aa7aa3cb1c8f044fa6e740fc32b777","repoDigests":["localhost/minikube-local-cache-test@sha256:36d2ce1fe2115eb0b404bb40ccffc5345c3de65068434e7d19cea95fdb
740d53"],"repoTags":["localhost/minikube-local-cache-test:functional-615476"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-615476 image ls --format json --alsologtostderr:
I0926 22:53:13.973054   21359 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:13.973152   21359 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:13.973160   21359 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:13.973164   21359 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:13.973358   21359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:53:13.973923   21359 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:13.974026   21359 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:13.974366   21359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:13.974423   21359 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:13.987937   21359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
I0926 22:53:13.988422   21359 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:13.988919   21359 main.go:141] libmachine: Using API Version  1
I0926 22:53:13.988973   21359 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:13.989336   21359 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:13.989537   21359 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:53:13.991889   21359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:13.991941   21359 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:14.006243   21359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
I0926 22:53:14.006687   21359 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:14.007177   21359 main.go:141] libmachine: Using API Version  1
I0926 22:53:14.007210   21359 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:14.007541   21359 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:14.007717   21359 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:53:14.007891   21359 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:14.007916   21359 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:53:14.010704   21359 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:14.011134   21359 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:53:14.011160   21359 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:14.011311   21359 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:53:14.011483   21359 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:53:14.011658   21359 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:53:14.011811   21359 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:53:14.094727   21359 ssh_runner.go:195] Run: sudo crictl images --output json
I0926 22:53:14.139563   21359 main.go:141] libmachine: Making call to close driver server
I0926 22:53:14.139576   21359 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:14.139898   21359 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:14.139915   21359 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:14.139929   21359 main.go:141] libmachine: Making call to close driver server
I0926 22:53:14.139935   21359 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:14.139944   21359 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
I0926 22:53:14.140194   21359 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:14.140206   21359 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
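The JSON form is an array of objects with id, repoDigests, repoTags and size, which makes it the easiest to post-process. A small sketch, assuming jq is installed on the host:

    # List "tag size" pairs from the JSON listing (any log lines on stderr are discarded):
    out/minikube-linux-amd64 -p functional-615476 image ls --format json 2>/dev/null \
      | jq -r '.[] | "\(.repoTags[0] // "<none>") \(.size)"'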

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-615476 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-615476
size: "4943877"
- id: 77f572b38598bf7da1c7c9f8cffcc395f8aa7aa3cb1c8f044fa6e740fc32b777
repoDigests:
- localhost/minikube-local-cache-test@sha256:36d2ce1fe2115eb0b404bb40ccffc5345c3de65068434e7d19cea95fdb740d53
repoTags:
- localhost/minikube-local-cache-test:functional-615476
size: "3330"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-615476 image ls --format yaml --alsologtostderr:
I0926 22:53:10.593601   21257 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:10.593724   21257 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:10.593735   21257 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:10.593741   21257 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:10.593953   21257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:53:10.594514   21257 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:10.594640   21257 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:10.595029   21257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:10.595105   21257 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:10.608865   21257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
I0926 22:53:10.609405   21257 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:10.609974   21257 main.go:141] libmachine: Using API Version  1
I0926 22:53:10.609994   21257 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:10.610318   21257 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:10.610505   21257 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:53:10.612429   21257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:10.612466   21257 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:10.625663   21257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
I0926 22:53:10.626203   21257 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:10.626702   21257 main.go:141] libmachine: Using API Version  1
I0926 22:53:10.626726   21257 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:10.627066   21257 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:10.627268   21257 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:53:10.627472   21257 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:10.627496   21257 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:53:10.630849   21257 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:10.631356   21257 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:53:10.631390   21257 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:10.631605   21257 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:53:10.631802   21257 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:53:10.631974   21257 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:53:10.632115   21257 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:53:10.717891   21257 ssh_runner.go:195] Run: sudo crictl images --output json
I0926 22:53:10.761485   21257 main.go:141] libmachine: Making call to close driver server
I0926 22:53:10.761507   21257 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:10.761848   21257 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:10.761867   21257 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:10.761881   21257 main.go:141] libmachine: Making call to close driver server
I0926 22:53:10.761889   21257 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:10.762115   21257 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:10.762138   21257 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-615476 ssh pgrep buildkitd: exit status 1 (193.253883ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr: (2.744919717s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 300903dc6db
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-615476
--> 7df7989adde
Successfully tagged localhost/my-image:functional-615476
7df7989addedf3a8bb07a2203e7f69eb815733d4064478c787d41606590f1476
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-615476 image build -t localhost/my-image:functional-615476 testdata/build --alsologtostderr:
I0926 22:53:11.005020   21311 out.go:360] Setting OutFile to fd 1 ...
I0926 22:53:11.005199   21311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:11.005210   21311 out.go:374] Setting ErrFile to fd 2...
I0926 22:53:11.005214   21311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:53:11.005400   21311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
I0926 22:53:11.006066   21311 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:11.006703   21311 config.go:182] Loaded profile config "functional-615476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:53:11.007114   21311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:11.007170   21311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:11.021141   21311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
I0926 22:53:11.021598   21311 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:11.022305   21311 main.go:141] libmachine: Using API Version  1
I0926 22:53:11.022329   21311 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:11.022708   21311 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:11.022913   21311 main.go:141] libmachine: (functional-615476) Calling .GetState
I0926 22:53:11.025227   21311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0926 22:53:11.025296   21311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0926 22:53:11.040621   21311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
I0926 22:53:11.041254   21311 main.go:141] libmachine: () Calling .GetVersion
I0926 22:53:11.041837   21311 main.go:141] libmachine: Using API Version  1
I0926 22:53:11.041876   21311 main.go:141] libmachine: () Calling .SetConfigRaw
I0926 22:53:11.042231   21311 main.go:141] libmachine: () Calling .GetMachineName
I0926 22:53:11.042403   21311 main.go:141] libmachine: (functional-615476) Calling .DriverName
I0926 22:53:11.042590   21311 ssh_runner.go:195] Run: systemctl --version
I0926 22:53:11.042626   21311 main.go:141] libmachine: (functional-615476) Calling .GetSSHHostname
I0926 22:53:11.045897   21311 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:11.046379   21311 main.go:141] libmachine: (functional-615476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:99:7e", ip: ""} in network mk-functional-615476: {Iface:virbr1 ExpiryTime:2025-09-26 23:44:07 +0000 UTC Type:0 Mac:52:54:00:a0:99:7e Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:functional-615476 Clientid:01:52:54:00:a0:99:7e}
I0926 22:53:11.046409   21311 main.go:141] libmachine: (functional-615476) DBG | domain functional-615476 has defined IP address 192.168.39.253 and MAC address 52:54:00:a0:99:7e in network mk-functional-615476
I0926 22:53:11.046590   21311 main.go:141] libmachine: (functional-615476) Calling .GetSSHPort
I0926 22:53:11.046793   21311 main.go:141] libmachine: (functional-615476) Calling .GetSSHKeyPath
I0926 22:53:11.046982   21311 main.go:141] libmachine: (functional-615476) Calling .GetSSHUsername
I0926 22:53:11.047125   21311 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/functional-615476/id_rsa Username:docker}
I0926 22:53:11.131136   21311 build_images.go:161] Building image from path: /tmp/build.2710132158.tar
I0926 22:53:11.131199   21311 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 22:53:11.146057   21311 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2710132158.tar
I0926 22:53:11.151634   21311 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2710132158.tar: stat -c "%s %y" /var/lib/minikube/build/build.2710132158.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2710132158.tar': No such file or directory
I0926 22:53:11.151670   21311 ssh_runner.go:362] scp /tmp/build.2710132158.tar --> /var/lib/minikube/build/build.2710132158.tar (3072 bytes)
I0926 22:53:11.185740   21311 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2710132158
I0926 22:53:11.199580   21311 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2710132158 -xf /var/lib/minikube/build/build.2710132158.tar
I0926 22:53:11.212641   21311 crio.go:315] Building image: /var/lib/minikube/build/build.2710132158
I0926 22:53:11.212717   21311 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-615476 /var/lib/minikube/build/build.2710132158 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0926 22:53:13.671703   21311 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-615476 /var/lib/minikube/build/build.2710132158 --cgroup-manager=cgroupfs: (2.458957244s)
I0926 22:53:13.671772   21311 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2710132158
I0926 22:53:13.686757   21311 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2710132158.tar
I0926 22:53:13.700994   21311 build_images.go:217] Built localhost/my-image:functional-615476 from /tmp/build.2710132158.tar
I0926 22:53:13.701031   21311 build_images.go:133] succeeded building to: functional-615476
I0926 22:53:13.701035   21311 build_images.go:134] failed building to: 
I0926 22:53:13.701057   21311 main.go:141] libmachine: Making call to close driver server
I0926 22:53:13.701069   21311 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:13.701352   21311 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:13.701379   21311 main.go:141] libmachine: Making call to close connection to plugin binary
I0926 22:53:13.701386   21311 main.go:141] libmachine: Making call to close driver server
I0926 22:53:13.701402   21311 main.go:141] libmachine: (functional-615476) Calling .Close
I0926 22:53:13.701414   21311 main.go:141] libmachine: (functional-615476) DBG | Closing plugin on server side
I0926 22:53:13.701597   21311 main.go:141] libmachine: Successfully made call to close driver server
I0926 22:53:13.701610   21311 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
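For reference, the three build steps logged above (STEP 1/3 through STEP 3/3) correspond to a Dockerfile along these lines; this is a reconstruction from the STEP output only, not a verbatim copy of the testdata/build directory:

FROM gcr.io/k8s-minikube/busybox
# no-op layer; matches "STEP 2/3: RUN true" in the log
RUN true
# copies the test file to the image root; matches "STEP 3/3: ADD content.txt /"
ADD content.txt /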

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-615476
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr: (1.154128222s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-615476
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image load --daemon kicbase/echo-server:functional-615476 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image save kicbase/echo-server:functional-615476 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image rm kicbase/echo-server:functional-615476 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-615476
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 image save --daemon kicbase/echo-server:functional-615476 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-615476
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 service list: (1.282674551s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-615476 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-615476 service list -o json: (1.27153918s)
functional_test.go:1504: Took "1.271640713s" to run "out/minikube-linux-amd64 -p functional-615476 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-615476
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-615476
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-615476
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (201.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m21.162861947s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (201.90s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 kubectl -- rollout status deployment/busybox: (4.158322535s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-78lkt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-d4k2n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-tvckc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-78lkt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-d4k2n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-tvckc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-78lkt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-d4k2n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-tvckc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-78lkt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-78lkt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-d4k2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-d4k2n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-tvckc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 kubectl -- exec busybox-7b57f96db7-tvckc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 node add --alsologtostderr -v 5: (43.908332717s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.85s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-434910 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp testdata/cp-test.txt ha-434910:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2067953208/001/cp-test_ha-434910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910:/home/docker/cp-test.txt ha-434910-m02:/home/docker/cp-test_ha-434910_ha-434910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test_ha-434910_ha-434910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910:/home/docker/cp-test.txt ha-434910-m03:/home/docker/cp-test_ha-434910_ha-434910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test_ha-434910_ha-434910-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910:/home/docker/cp-test.txt ha-434910-m04:/home/docker/cp-test_ha-434910_ha-434910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test_ha-434910_ha-434910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp testdata/cp-test.txt ha-434910-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2067953208/001/cp-test_ha-434910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m02:/home/docker/cp-test.txt ha-434910:/home/docker/cp-test_ha-434910-m02_ha-434910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test_ha-434910-m02_ha-434910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m02:/home/docker/cp-test.txt ha-434910-m03:/home/docker/cp-test_ha-434910-m02_ha-434910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test_ha-434910-m02_ha-434910-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m02:/home/docker/cp-test.txt ha-434910-m04:/home/docker/cp-test_ha-434910-m02_ha-434910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test_ha-434910-m02_ha-434910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp testdata/cp-test.txt ha-434910-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2067953208/001/cp-test_ha-434910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m03:/home/docker/cp-test.txt ha-434910:/home/docker/cp-test_ha-434910-m03_ha-434910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test_ha-434910-m03_ha-434910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m03:/home/docker/cp-test.txt ha-434910-m02:/home/docker/cp-test_ha-434910-m03_ha-434910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test_ha-434910-m03_ha-434910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m03:/home/docker/cp-test.txt ha-434910-m04:/home/docker/cp-test_ha-434910-m03_ha-434910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test_ha-434910-m03_ha-434910-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp testdata/cp-test.txt ha-434910-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2067953208/001/cp-test_ha-434910-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m04:/home/docker/cp-test.txt ha-434910:/home/docker/cp-test_ha-434910-m04_ha-434910.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910 "sudo cat /home/docker/cp-test_ha-434910-m04_ha-434910.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m04:/home/docker/cp-test.txt ha-434910-m02:/home/docker/cp-test_ha-434910-m04_ha-434910-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m02 "sudo cat /home/docker/cp-test_ha-434910-m04_ha-434910-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 cp ha-434910-m04:/home/docker/cp-test.txt ha-434910-m03:/home/docker/cp-test_ha-434910-m04_ha-434910-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 ssh -n ha-434910-m03 "sudo cat /home/docker/cp-test_ha-434910-m04_ha-434910-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.64s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (85.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node stop m02 --alsologtostderr -v 5
E0926 23:01:32.969582    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.009987    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.016398    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.027881    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.049277    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.090764    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.172242    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.333732    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:51.655457    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:52.297110    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:53.578729    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:01:56.141662    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:01.264009    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:11.506090    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:02:31.988075    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 node stop m02 --alsologtostderr -v 5: (1m25.181272374s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5: exit status 7 (688.352933ms)

                                                
                                                
-- stdout --
	ha-434910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434910-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434910-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434910-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:02:54.507010   27312 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:02:54.507294   27312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:02:54.507304   27312 out.go:374] Setting ErrFile to fd 2...
	I0926 23:02:54.507309   27312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:02:54.507568   27312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:02:54.507780   27312 out.go:368] Setting JSON to false
	I0926 23:02:54.507846   27312 mustload.go:65] Loading cluster: ha-434910
	I0926 23:02:54.507935   27312 notify.go:220] Checking for updates...
	I0926 23:02:54.508340   27312 config.go:182] Loaded profile config "ha-434910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:02:54.508356   27312 status.go:174] checking status of ha-434910 ...
	I0926 23:02:54.508812   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.508875   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.527123   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0926 23:02:54.527624   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.528179   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.528215   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.528558   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.528742   27312 main.go:141] libmachine: (ha-434910) Calling .GetState
	I0926 23:02:54.530860   27312 status.go:371] ha-434910 host status = "Running" (err=<nil>)
	I0926 23:02:54.530878   27312 host.go:66] Checking if "ha-434910" exists ...
	I0926 23:02:54.531183   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.531233   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.545203   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0926 23:02:54.545674   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.546103   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.546127   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.546550   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.546732   27312 main.go:141] libmachine: (ha-434910) Calling .GetIP
	I0926 23:02:54.549901   27312 main.go:141] libmachine: (ha-434910) DBG | domain ha-434910 has defined MAC address 52:54:00:b1:9c:18 in network mk-ha-434910
	I0926 23:02:54.550449   27312 main.go:141] libmachine: (ha-434910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:9c:18", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-26 23:57:16 +0000 UTC Type:0 Mac:52:54:00:b1:9c:18 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-434910 Clientid:01:52:54:00:b1:9c:18}
	I0926 23:02:54.550475   27312 main.go:141] libmachine: (ha-434910) DBG | domain ha-434910 has defined IP address 192.168.39.73 and MAC address 52:54:00:b1:9c:18 in network mk-ha-434910
	I0926 23:02:54.550657   27312 host.go:66] Checking if "ha-434910" exists ...
	I0926 23:02:54.550976   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.551013   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.564597   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I0926 23:02:54.565219   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.565747   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.565767   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.566231   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.566449   27312 main.go:141] libmachine: (ha-434910) Calling .DriverName
	I0926 23:02:54.566656   27312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:02:54.566678   27312 main.go:141] libmachine: (ha-434910) Calling .GetSSHHostname
	I0926 23:02:54.569722   27312 main.go:141] libmachine: (ha-434910) DBG | domain ha-434910 has defined MAC address 52:54:00:b1:9c:18 in network mk-ha-434910
	I0926 23:02:54.570226   27312 main.go:141] libmachine: (ha-434910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:9c:18", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-26 23:57:16 +0000 UTC Type:0 Mac:52:54:00:b1:9c:18 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-434910 Clientid:01:52:54:00:b1:9c:18}
	I0926 23:02:54.570241   27312 main.go:141] libmachine: (ha-434910) DBG | domain ha-434910 has defined IP address 192.168.39.73 and MAC address 52:54:00:b1:9c:18 in network mk-ha-434910
	I0926 23:02:54.570475   27312 main.go:141] libmachine: (ha-434910) Calling .GetSSHPort
	I0926 23:02:54.570660   27312 main.go:141] libmachine: (ha-434910) Calling .GetSSHKeyPath
	I0926 23:02:54.570820   27312 main.go:141] libmachine: (ha-434910) Calling .GetSSHUsername
	I0926 23:02:54.570983   27312 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/ha-434910/id_rsa Username:docker}
	I0926 23:02:54.659980   27312 ssh_runner.go:195] Run: systemctl --version
	I0926 23:02:54.668303   27312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:02:54.693269   27312 kubeconfig.go:125] found "ha-434910" server: "https://192.168.39.254:8443"
	I0926 23:02:54.693306   27312 api_server.go:166] Checking apiserver status ...
	I0926 23:02:54.693346   27312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:02:54.715842   27312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	W0926 23:02:54.732705   27312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:02:54.732777   27312 ssh_runner.go:195] Run: ls
	I0926 23:02:54.738514   27312 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0926 23:02:54.745014   27312 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0926 23:02:54.745048   27312 status.go:463] ha-434910 apiserver status = Running (err=<nil>)
	I0926 23:02:54.745063   27312 status.go:176] ha-434910 status: &{Name:ha-434910 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:02:54.745085   27312 status.go:174] checking status of ha-434910-m02 ...
	I0926 23:02:54.745439   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.745493   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.760473   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36241
	I0926 23:02:54.760943   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.761468   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.761494   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.761881   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.762072   27312 main.go:141] libmachine: (ha-434910-m02) Calling .GetState
	I0926 23:02:54.763968   27312 status.go:371] ha-434910-m02 host status = "Stopped" (err=<nil>)
	I0926 23:02:54.763982   27312 status.go:384] host is not running, skipping remaining checks
	I0926 23:02:54.763987   27312 status.go:176] ha-434910-m02 status: &{Name:ha-434910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:02:54.764005   27312 status.go:174] checking status of ha-434910-m03 ...
	I0926 23:02:54.764305   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.764341   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.778392   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35117
	I0926 23:02:54.778779   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.779244   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.779266   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.779590   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.779759   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetState
	I0926 23:02:54.781543   27312 status.go:371] ha-434910-m03 host status = "Running" (err=<nil>)
	I0926 23:02:54.781559   27312 host.go:66] Checking if "ha-434910-m03" exists ...
	I0926 23:02:54.781874   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.781927   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.796056   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0926 23:02:54.796465   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.796930   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.796955   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.797265   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.797455   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetIP
	I0926 23:02:54.801357   27312 main.go:141] libmachine: (ha-434910-m03) DBG | domain ha-434910-m03 has defined MAC address 52:54:00:f6:37:2b in network mk-ha-434910
	I0926 23:02:54.801929   27312 main.go:141] libmachine: (ha-434910-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:37:2b", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-26 23:59:17 +0000 UTC Type:0 Mac:52:54:00:f6:37:2b Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-434910-m03 Clientid:01:52:54:00:f6:37:2b}
	I0926 23:02:54.801958   27312 main.go:141] libmachine: (ha-434910-m03) DBG | domain ha-434910-m03 has defined IP address 192.168.39.114 and MAC address 52:54:00:f6:37:2b in network mk-ha-434910
	I0926 23:02:54.802183   27312 host.go:66] Checking if "ha-434910-m03" exists ...
	I0926 23:02:54.802553   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.802599   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:54.818124   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0926 23:02:54.818745   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:54.819302   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:54.819334   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:54.819793   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:54.820047   27312 main.go:141] libmachine: (ha-434910-m03) Calling .DriverName
	I0926 23:02:54.820276   27312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:02:54.820300   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetSSHHostname
	I0926 23:02:54.823669   27312 main.go:141] libmachine: (ha-434910-m03) DBG | domain ha-434910-m03 has defined MAC address 52:54:00:f6:37:2b in network mk-ha-434910
	I0926 23:02:54.824237   27312 main.go:141] libmachine: (ha-434910-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:37:2b", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-26 23:59:17 +0000 UTC Type:0 Mac:52:54:00:f6:37:2b Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-434910-m03 Clientid:01:52:54:00:f6:37:2b}
	I0926 23:02:54.824273   27312 main.go:141] libmachine: (ha-434910-m03) DBG | domain ha-434910-m03 has defined IP address 192.168.39.114 and MAC address 52:54:00:f6:37:2b in network mk-ha-434910
	I0926 23:02:54.824462   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetSSHPort
	I0926 23:02:54.824662   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetSSHKeyPath
	I0926 23:02:54.824848   27312 main.go:141] libmachine: (ha-434910-m03) Calling .GetSSHUsername
	I0926 23:02:54.825051   27312 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/ha-434910-m03/id_rsa Username:docker}
	I0926 23:02:54.910642   27312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:02:54.932340   27312 kubeconfig.go:125] found "ha-434910" server: "https://192.168.39.254:8443"
	I0926 23:02:54.932373   27312 api_server.go:166] Checking apiserver status ...
	I0926 23:02:54.932416   27312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:02:54.955696   27312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup
	W0926 23:02:54.970655   27312 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:02:54.970738   27312 ssh_runner.go:195] Run: ls
	I0926 23:02:54.978969   27312 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0926 23:02:54.986285   27312 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0926 23:02:54.986312   27312 status.go:463] ha-434910-m03 apiserver status = Running (err=<nil>)
	I0926 23:02:54.986323   27312 status.go:176] ha-434910-m03 status: &{Name:ha-434910-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:02:54.986343   27312 status.go:174] checking status of ha-434910-m04 ...
	I0926 23:02:54.986752   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:54.986800   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:55.000975   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I0926 23:02:55.001477   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:55.001962   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:55.001984   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:55.002430   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:55.002646   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetState
	I0926 23:02:55.004596   27312 status.go:371] ha-434910-m04 host status = "Running" (err=<nil>)
	I0926 23:02:55.004621   27312 host.go:66] Checking if "ha-434910-m04" exists ...
	I0926 23:02:55.004924   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:55.004979   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:55.019267   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0926 23:02:55.019763   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:55.020306   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:55.020331   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:55.020708   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:55.020951   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetIP
	I0926 23:02:55.024082   27312 main.go:141] libmachine: (ha-434910-m04) DBG | domain ha-434910-m04 has defined MAC address 52:54:00:af:fe:3f in network mk-ha-434910
	I0926 23:02:55.024559   27312 main.go:141] libmachine: (ha-434910-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:fe:3f", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-27 00:00:47 +0000 UTC Type:0 Mac:52:54:00:af:fe:3f Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-434910-m04 Clientid:01:52:54:00:af:fe:3f}
	I0926 23:02:55.024599   27312 main.go:141] libmachine: (ha-434910-m04) DBG | domain ha-434910-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:af:fe:3f in network mk-ha-434910
	I0926 23:02:55.024776   27312 host.go:66] Checking if "ha-434910-m04" exists ...
	I0926 23:02:55.025094   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:02:55.025147   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:02:55.038736   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I0926 23:02:55.039381   27312 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:02:55.039967   27312 main.go:141] libmachine: Using API Version  1
	I0926 23:02:55.040015   27312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:02:55.040393   27312 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:02:55.040558   27312 main.go:141] libmachine: (ha-434910-m04) Calling .DriverName
	I0926 23:02:55.040771   27312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:02:55.040794   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetSSHHostname
	I0926 23:02:55.044250   27312 main.go:141] libmachine: (ha-434910-m04) DBG | domain ha-434910-m04 has defined MAC address 52:54:00:af:fe:3f in network mk-ha-434910
	I0926 23:02:55.044778   27312 main.go:141] libmachine: (ha-434910-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:fe:3f", ip: ""} in network mk-ha-434910: {Iface:virbr1 ExpiryTime:2025-09-27 00:00:47 +0000 UTC Type:0 Mac:52:54:00:af:fe:3f Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-434910-m04 Clientid:01:52:54:00:af:fe:3f}
	I0926 23:02:55.044815   27312 main.go:141] libmachine: (ha-434910-m04) DBG | domain ha-434910-m04 has defined IP address 192.168.39.168 and MAC address 52:54:00:af:fe:3f in network mk-ha-434910
	I0926 23:02:55.045026   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetSSHPort
	I0926 23:02:55.045197   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetSSHKeyPath
	I0926 23:02:55.045363   27312 main.go:141] libmachine: (ha-434910-m04) Calling .GetSSHUsername
	I0926 23:02:55.045484   27312 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/ha-434910-m04/id_rsa Username:docker}
	I0926 23:02:55.130019   27312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:02:55.149574   27312 status.go:176] ha-434910-m04 status: &{Name:ha-434910-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (85.87s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node start m02 --alsologtostderr -v 5
E0926 23:02:56.035890    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:03:12.950119    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 node start m02 --alsologtostderr -v 5: (33.616973677s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5: (1.143547742s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 stop --alsologtostderr -v 5
E0926 23:04:34.872056    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:06:32.969213    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:06:51.012662    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:07:18.719165    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 stop --alsologtostderr -v 5: (4m6.973789189s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 start --wait true --alsologtostderr -v 5: (2m3.215299868s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.30s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 node delete m03 --alsologtostderr -v 5: (17.673074046s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (268.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 stop --alsologtostderr -v 5
E0926 23:11:32.969624    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:11:51.007493    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 stop --alsologtostderr -v 5: (4m28.241253114s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5: exit status 7 (95.984978ms)

                                                
                                                
-- stdout --
	ha-434910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434910-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434910-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:14:29.403503   31285 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:14:29.403638   31285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:14:29.403650   31285 out.go:374] Setting ErrFile to fd 2...
	I0926 23:14:29.403658   31285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:14:29.403894   31285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:14:29.404089   31285 out.go:368] Setting JSON to false
	I0926 23:14:29.404135   31285 mustload.go:65] Loading cluster: ha-434910
	I0926 23:14:29.404252   31285 notify.go:220] Checking for updates...
	I0926 23:14:29.404519   31285 config.go:182] Loaded profile config "ha-434910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:14:29.404535   31285 status.go:174] checking status of ha-434910 ...
	I0926 23:14:29.405009   31285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:14:29.405054   31285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:14:29.418297   31285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0926 23:14:29.418743   31285 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:14:29.419346   31285 main.go:141] libmachine: Using API Version  1
	I0926 23:14:29.419373   31285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:14:29.419726   31285 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:14:29.419902   31285 main.go:141] libmachine: (ha-434910) Calling .GetState
	I0926 23:14:29.421566   31285 status.go:371] ha-434910 host status = "Stopped" (err=<nil>)
	I0926 23:14:29.421580   31285 status.go:384] host is not running, skipping remaining checks
	I0926 23:14:29.421588   31285 status.go:176] ha-434910 status: &{Name:ha-434910 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:14:29.421613   31285 status.go:174] checking status of ha-434910-m02 ...
	I0926 23:14:29.421952   31285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:14:29.421997   31285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:14:29.435355   31285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0926 23:14:29.435702   31285 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:14:29.436147   31285 main.go:141] libmachine: Using API Version  1
	I0926 23:14:29.436170   31285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:14:29.436536   31285 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:14:29.436735   31285 main.go:141] libmachine: (ha-434910-m02) Calling .GetState
	I0926 23:14:29.438303   31285 status.go:371] ha-434910-m02 host status = "Stopped" (err=<nil>)
	I0926 23:14:29.438314   31285 status.go:384] host is not running, skipping remaining checks
	I0926 23:14:29.438319   31285 status.go:176] ha-434910-m02 status: &{Name:ha-434910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:14:29.438341   31285 status.go:174] checking status of ha-434910-m04 ...
	I0926 23:14:29.438647   31285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:14:29.438712   31285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:14:29.451553   31285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0926 23:14:29.452015   31285 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:14:29.452403   31285 main.go:141] libmachine: Using API Version  1
	I0926 23:14:29.452419   31285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:14:29.452747   31285 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:14:29.452954   31285 main.go:141] libmachine: (ha-434910-m04) Calling .GetState
	I0926 23:14:29.454940   31285 status.go:371] ha-434910-m04 host status = "Stopped" (err=<nil>)
	I0926 23:14:29.454954   31285 status.go:384] host is not running, skipping remaining checks
	I0926 23:14:29.454959   31285 status.go:176] ha-434910-m04 status: &{Name:ha-434910-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (268.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (96.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.316668405s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 node add --control-plane --alsologtostderr -v 5
E0926 23:16:32.969047    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:16:51.015219    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-434910 node add --control-plane --alsologtostderr -v 5: (1m16.31594117s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-434910 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.23s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (82.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-684925 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:18:14.080886    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-684925 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.840208283s)
--- PASS: TestJSONOutput/start/Command (82.84s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-684925 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-684925 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.54s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-684925 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-684925 --output=json --user=testUser: (8.540616022s)
--- PASS: TestJSONOutput/stop/Command (8.54s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-909016 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-909016 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.234432ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5f01895b-bb59-4f02-88bb-992900a69612","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-909016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b4f66dd-6136-43fb-8adb-275e7a0222ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"49c165e9-72c3-4c0c-af97-89272008bc05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82064506-9b2e-4a8e-8b33-e022826f6123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig"}}
	{"specversion":"1.0","id":"27606aa3-d1f6-40ea-ab73-ba669e43a4f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube"}}
	{"specversion":"1.0","id":"9287f4fe-6a78-4d38-bc58-ae161dea3663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8df24038-e02e-4a36-9d16-a6ca46487c4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"484a7734-87ba-439c-bfda-816ecf9c51f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-909016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-909016
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.39s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-003548 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:19:36.039658    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-003548 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.856012915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-015679 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-015679 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.71346479s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-003548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-015679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-015679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-015679
helpers_test.go:175: Cleaning up "first-003548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-003548
--- PASS: TestMinikubeProfile (88.39s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-246054 --memory=3072 --mount-string /tmp/TestMountStartserial947039701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-246054 --memory=3072 --mount-string /tmp/TestMountStartserial947039701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.757766115s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.76s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-246054 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-246054 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-258540 --memory=3072 --mount-string /tmp/TestMountStartserial947039701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-258540 --memory=3072 --mount-string /tmp/TestMountStartserial947039701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.046828419s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-246054 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-258540
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-258540: (1.345728138s)
--- PASS: TestMountStart/serial/Stop (1.35s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.48s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-258540
E0926 23:21:32.970434    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-258540: (19.476339873s)
--- PASS: TestMountStart/serial/RestartStopped (20.48s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-258540 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (101.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703869 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:21:51.008326    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703869 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m40.574427787s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.03s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-703869 -- rollout status deployment/busybox: (3.364561555s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-njb4z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-sgdgx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-njb4z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-sgdgx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-njb4z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-sgdgx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.85s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-njb4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-njb4z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-sgdgx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-703869 -- exec busybox-7b57f96db7-sgdgx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (41.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703869 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-703869 -v=5 --alsologtostderr: (41.373121205s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.97s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-703869 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp testdata/cp-test.txt multinode-703869:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1800364836/001/cp-test_multinode-703869.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869:/home/docker/cp-test.txt multinode-703869-m02:/home/docker/cp-test_multinode-703869_multinode-703869-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test_multinode-703869_multinode-703869-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869:/home/docker/cp-test.txt multinode-703869-m03:/home/docker/cp-test_multinode-703869_multinode-703869-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test_multinode-703869_multinode-703869-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp testdata/cp-test.txt multinode-703869-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1800364836/001/cp-test_multinode-703869-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m02:/home/docker/cp-test.txt multinode-703869:/home/docker/cp-test_multinode-703869-m02_multinode-703869.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test_multinode-703869-m02_multinode-703869.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m02:/home/docker/cp-test.txt multinode-703869-m03:/home/docker/cp-test_multinode-703869-m02_multinode-703869-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test_multinode-703869-m02_multinode-703869-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp testdata/cp-test.txt multinode-703869-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1800364836/001/cp-test_multinode-703869-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m03:/home/docker/cp-test.txt multinode-703869:/home/docker/cp-test_multinode-703869-m03_multinode-703869.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869 "sudo cat /home/docker/cp-test_multinode-703869-m03_multinode-703869.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 cp multinode-703869-m03:/home/docker/cp-test.txt multinode-703869-m02:/home/docker/cp-test_multinode-703869-m03_multinode-703869-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 ssh -n multinode-703869-m02 "sudo cat /home/docker/cp-test_multinode-703869-m03_multinode-703869-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.39s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-703869 node stop m03: (1.626208168s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703869 status: exit status 7 (456.041137ms)

                                                
                                                
-- stdout --
	multinode-703869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr: exit status 7 (441.334437ms)

                                                
                                                
-- stdout --
	multinode-703869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-703869-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-703869-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:24:21.387277   39380 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:24:21.387576   39380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:24:21.387588   39380 out.go:374] Setting ErrFile to fd 2...
	I0926 23:24:21.387594   39380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:24:21.387792   39380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:24:21.388036   39380 out.go:368] Setting JSON to false
	I0926 23:24:21.388083   39380 mustload.go:65] Loading cluster: multinode-703869
	I0926 23:24:21.388174   39380 notify.go:220] Checking for updates...
	I0926 23:24:21.388532   39380 config.go:182] Loaded profile config "multinode-703869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:24:21.388550   39380 status.go:174] checking status of multinode-703869 ...
	I0926 23:24:21.389173   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.389224   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.404154   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0926 23:24:21.404623   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.405187   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.405210   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.405570   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.405847   39380 main.go:141] libmachine: (multinode-703869) Calling .GetState
	I0926 23:24:21.407818   39380 status.go:371] multinode-703869 host status = "Running" (err=<nil>)
	I0926 23:24:21.407858   39380 host.go:66] Checking if "multinode-703869" exists ...
	I0926 23:24:21.408191   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.408229   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.422755   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0926 23:24:21.423185   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.423552   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.423574   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.424026   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.424224   39380 main.go:141] libmachine: (multinode-703869) Calling .GetIP
	I0926 23:24:21.427802   39380 main.go:141] libmachine: (multinode-703869) DBG | domain multinode-703869 has defined MAC address 52:54:00:42:56:1a in network mk-multinode-703869
	I0926 23:24:21.428303   39380 main.go:141] libmachine: (multinode-703869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:56:1a", ip: ""} in network mk-multinode-703869: {Iface:virbr1 ExpiryTime:2025-09-27 00:21:58 +0000 UTC Type:0 Mac:52:54:00:42:56:1a Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-703869 Clientid:01:52:54:00:42:56:1a}
	I0926 23:24:21.428339   39380 main.go:141] libmachine: (multinode-703869) DBG | domain multinode-703869 has defined IP address 192.168.39.26 and MAC address 52:54:00:42:56:1a in network mk-multinode-703869
	I0926 23:24:21.428486   39380 host.go:66] Checking if "multinode-703869" exists ...
	I0926 23:24:21.428820   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.428881   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.442811   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0926 23:24:21.443237   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.443746   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.443768   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.444099   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.444317   39380 main.go:141] libmachine: (multinode-703869) Calling .DriverName
	I0926 23:24:21.444494   39380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:24:21.444516   39380 main.go:141] libmachine: (multinode-703869) Calling .GetSSHHostname
	I0926 23:24:21.447656   39380 main.go:141] libmachine: (multinode-703869) DBG | domain multinode-703869 has defined MAC address 52:54:00:42:56:1a in network mk-multinode-703869
	I0926 23:24:21.448256   39380 main.go:141] libmachine: (multinode-703869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:56:1a", ip: ""} in network mk-multinode-703869: {Iface:virbr1 ExpiryTime:2025-09-27 00:21:58 +0000 UTC Type:0 Mac:52:54:00:42:56:1a Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-703869 Clientid:01:52:54:00:42:56:1a}
	I0926 23:24:21.448292   39380 main.go:141] libmachine: (multinode-703869) DBG | domain multinode-703869 has defined IP address 192.168.39.26 and MAC address 52:54:00:42:56:1a in network mk-multinode-703869
	I0926 23:24:21.448466   39380 main.go:141] libmachine: (multinode-703869) Calling .GetSSHPort
	I0926 23:24:21.448650   39380 main.go:141] libmachine: (multinode-703869) Calling .GetSSHKeyPath
	I0926 23:24:21.448787   39380 main.go:141] libmachine: (multinode-703869) Calling .GetSSHUsername
	I0926 23:24:21.448921   39380 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/multinode-703869/id_rsa Username:docker}
	I0926 23:24:21.535866   39380 ssh_runner.go:195] Run: systemctl --version
	I0926 23:24:21.543443   39380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:24:21.561226   39380 kubeconfig.go:125] found "multinode-703869" server: "https://192.168.39.26:8443"
	I0926 23:24:21.561261   39380 api_server.go:166] Checking apiserver status ...
	I0926 23:24:21.561293   39380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:24:21.583764   39380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	W0926 23:24:21.596330   39380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:24:21.596386   39380 ssh_runner.go:195] Run: ls
	I0926 23:24:21.601977   39380 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I0926 23:24:21.607390   39380 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I0926 23:24:21.607415   39380 status.go:463] multinode-703869 apiserver status = Running (err=<nil>)
	I0926 23:24:21.607424   39380 status.go:176] multinode-703869 status: &{Name:multinode-703869 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:24:21.607448   39380 status.go:174] checking status of multinode-703869-m02 ...
	I0926 23:24:21.607736   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.607771   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.621722   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0926 23:24:21.622220   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.622704   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.622734   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.623041   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.623237   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetState
	I0926 23:24:21.625066   39380 status.go:371] multinode-703869-m02 host status = "Running" (err=<nil>)
	I0926 23:24:21.625093   39380 host.go:66] Checking if "multinode-703869-m02" exists ...
	I0926 23:24:21.625366   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.625397   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.639718   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0926 23:24:21.640143   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.640685   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.640706   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.641044   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.641243   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetIP
	I0926 23:24:21.645198   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | domain multinode-703869-m02 has defined MAC address 52:54:00:62:c1:35 in network mk-multinode-703869
	I0926 23:24:21.645722   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c1:35", ip: ""} in network mk-multinode-703869: {Iface:virbr1 ExpiryTime:2025-09-27 00:22:53 +0000 UTC Type:0 Mac:52:54:00:62:c1:35 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-703869-m02 Clientid:01:52:54:00:62:c1:35}
	I0926 23:24:21.645744   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | domain multinode-703869-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:62:c1:35 in network mk-multinode-703869
	I0926 23:24:21.645946   39380 host.go:66] Checking if "multinode-703869-m02" exists ...
	I0926 23:24:21.646243   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.646282   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.660118   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0926 23:24:21.660571   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.661016   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.661035   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.661382   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.661582   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .DriverName
	I0926 23:24:21.661776   39380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:24:21.661801   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetSSHHostname
	I0926 23:24:21.664795   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | domain multinode-703869-m02 has defined MAC address 52:54:00:62:c1:35 in network mk-multinode-703869
	I0926 23:24:21.665214   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c1:35", ip: ""} in network mk-multinode-703869: {Iface:virbr1 ExpiryTime:2025-09-27 00:22:53 +0000 UTC Type:0 Mac:52:54:00:62:c1:35 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-703869-m02 Clientid:01:52:54:00:62:c1:35}
	I0926 23:24:21.665257   39380 main.go:141] libmachine: (multinode-703869-m02) DBG | domain multinode-703869-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:62:c1:35 in network mk-multinode-703869
	I0926 23:24:21.665433   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetSSHPort
	I0926 23:24:21.665610   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetSSHKeyPath
	I0926 23:24:21.665754   39380 main.go:141] libmachine: (multinode-703869-m02) Calling .GetSSHUsername
	I0926 23:24:21.665936   39380 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21642-6020/.minikube/machines/multinode-703869-m02/id_rsa Username:docker}
	I0926 23:24:21.746535   39380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:24:21.763667   39380 status.go:176] multinode-703869-m02 status: &{Name:multinode-703869-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:24:21.763705   39380 status.go:174] checking status of multinode-703869-m03 ...
	I0926 23:24:21.764077   39380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:24:21.764121   39380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:24:21.778023   39380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0926 23:24:21.778471   39380 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:24:21.778931   39380 main.go:141] libmachine: Using API Version  1
	I0926 23:24:21.778953   39380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:24:21.779286   39380 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:24:21.779468   39380 main.go:141] libmachine: (multinode-703869-m03) Calling .GetState
	I0926 23:24:21.781173   39380 status.go:371] multinode-703869-m03 host status = "Stopped" (err=<nil>)
	I0926 23:24:21.781185   39380 status.go:384] host is not running, skipping remaining checks
	I0926 23:24:21.781190   39380 status.go:176] multinode-703869-m03 status: &{Name:multinode-703869-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.52s)
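The non-zero status above is expected: status walks every node in the profile, and the stderr trace shows it probing the control-plane apiserver at https://192.168.39.26:8443/healthz before reporting the stopped worker. A minimal way to reproduce the same behaviour by hand, using only the commands and profile name from this run (exit codes are the ones observed above):
	out/minikube-linux-amd64 -p multinode-703869 node stop m03
	out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr    # exit status 7 while m03 is stopped
	out/minikube-linux-amd64 -p multinode-703869 node start m03              # status goes back to a clean exit once the node is up (see StartAfterStop below)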

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (42.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-703869 node start m03 -v=5 --alsologtostderr: (41.4443597s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.13s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (303.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703869
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-703869
E0926 23:26:32.970218    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:26:51.014527    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-703869: (2m56.996743416s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703869 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703869 --wait=true -v=5 --alsologtostderr: (2m5.944728912s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703869
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.04s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-703869 node delete m03: (2.233591864s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.78s)
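The readiness check above uses a kubectl go-template rather than jsonpath: it walks every node and prints the status of its Ready condition, one line per node, so after m03 is deleted the test presumably expects one "True" line per remaining node. The template as run above:
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"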

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (171.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 stop
E0926 23:31:32.969081    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:31:51.014858    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-703869 stop: (2m51.757941181s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703869 status: exit status 7 (90.950327ms)

                                                
                                                
-- stdout --
	multinode-703869
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703869-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr: exit status 7 (81.487338ms)

                                                
                                                
-- stdout --
	multinode-703869
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-703869-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:33:01.614541   42215 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:33:01.614646   42215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:33:01.614653   42215 out.go:374] Setting ErrFile to fd 2...
	I0926 23:33:01.614662   42215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:33:01.614916   42215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:33:01.615089   42215 out.go:368] Setting JSON to false
	I0926 23:33:01.615123   42215 mustload.go:65] Loading cluster: multinode-703869
	I0926 23:33:01.615252   42215 notify.go:220] Checking for updates...
	I0926 23:33:01.615489   42215 config.go:182] Loaded profile config "multinode-703869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:33:01.615503   42215 status.go:174] checking status of multinode-703869 ...
	I0926 23:33:01.615939   42215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:33:01.616036   42215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:33:01.630131   42215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I0926 23:33:01.630589   42215 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:33:01.631115   42215 main.go:141] libmachine: Using API Version  1
	I0926 23:33:01.631162   42215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:33:01.631548   42215 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:33:01.631755   42215 main.go:141] libmachine: (multinode-703869) Calling .GetState
	I0926 23:33:01.633771   42215 status.go:371] multinode-703869 host status = "Stopped" (err=<nil>)
	I0926 23:33:01.633785   42215 status.go:384] host is not running, skipping remaining checks
	I0926 23:33:01.633790   42215 status.go:176] multinode-703869 status: &{Name:multinode-703869 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:33:01.633821   42215 status.go:174] checking status of multinode-703869-m02 ...
	I0926 23:33:01.634133   42215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0926 23:33:01.634172   42215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0926 23:33:01.648563   42215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0926 23:33:01.649135   42215 main.go:141] libmachine: () Calling .GetVersion
	I0926 23:33:01.649580   42215 main.go:141] libmachine: Using API Version  1
	I0926 23:33:01.649604   42215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0926 23:33:01.650046   42215 main.go:141] libmachine: () Calling .GetMachineName
	I0926 23:33:01.650321   42215 main.go:141] libmachine: (multinode-703869-m02) Calling .GetState
	I0926 23:33:01.652180   42215 status.go:371] multinode-703869-m02 host status = "Stopped" (err=<nil>)
	I0926 23:33:01.652193   42215 status.go:384] host is not running, skipping remaining checks
	I0926 23:33:01.652199   42215 status.go:176] multinode-703869-m02 status: &{Name:multinode-703869-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (171.93s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (87.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703869 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703869 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.071575195s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-703869 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.71s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-703869
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703869-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-703869-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (63.600814ms)

                                                
                                                
-- stdout --
	* [multinode-703869-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-703869-m02' is duplicated with machine name 'multinode-703869-m02' in profile 'multinode-703869'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-703869-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:34:54.083298    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-703869-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.978464476s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-703869
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-703869: exit status 80 (233.644285ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-703869 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-703869-m03 already exists in multinode-703869-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-703869-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.19s)
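Both non-zero exits here are intentional guard rails rather than regressions. A sketch of the two collisions the test provokes, using the profile names from this run and the exit codes reported above:
	out/minikube-linux-amd64 start -p multinode-703869-m02 --driver=kvm2 --container-runtime=crio    # exit 14 (MK_USAGE): name clashes with a machine inside the multinode-703869 profile
	out/minikube-linux-amd64 start -p multinode-703869-m03 --driver=kvm2 --container-runtime=crio    # allowed: creates a standalone profile
	out/minikube-linux-amd64 node add -p multinode-703869                                            # exit 80 (GUEST_NODE_ADD): the next node name, m03, now collides with that profile
	out/minikube-linux-amd64 delete -p multinode-703869-m03                                          # cleanup, as the test does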

                                                
                                    
x
+
TestScheduledStopUnix (111.4s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-558962 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-558962 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.669831713s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558962 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-558962 -n scheduled-stop-558962
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558962 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0926 23:38:01.660144    9914 retry.go:31] will retry after 93.103µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.661354    9914 retry.go:31] will retry after 86.051µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.662491    9914 retry.go:31] will retry after 243.415µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.663632    9914 retry.go:31] will retry after 180.436µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.664770    9914 retry.go:31] will retry after 444.436µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.665902    9914 retry.go:31] will retry after 531.541µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.667031    9914 retry.go:31] will retry after 823.085µs: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.668152    9914 retry.go:31] will retry after 1.389305ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.670349    9914 retry.go:31] will retry after 3.269657ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.674575    9914 retry.go:31] will retry after 2.978002ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.677788    9914 retry.go:31] will retry after 5.587743ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.684033    9914 retry.go:31] will retry after 12.557895ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.697330    9914 retry.go:31] will retry after 19.259267ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.717597    9914 retry.go:31] will retry after 19.771753ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
I0926 23:38:01.738185    9914 retry.go:31] will retry after 36.173379ms: open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/scheduled-stop-558962/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558962 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558962 -n scheduled-stop-558962
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-558962
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-558962 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-558962
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-558962: exit status 7 (64.714679ms)

                                                
                                                
-- stdout --
	scheduled-stop-558962
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558962 -n scheduled-stop-558962
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-558962 -n scheduled-stop-558962: exit status 7 (63.448195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-558962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-558962
--- PASS: TestScheduledStopUnix (111.40s)
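The profile name here is generated per run; the flow being exercised is minikube's scheduled stop. A rough outline of the same sequence, restricted to commands that appear in the log above:
	out/minikube-linux-amd64 stop -p scheduled-stop-558962 --schedule 5m                                   # arm a stop 5 minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-558962 -n scheduled-stop-558962
	out/minikube-linux-amd64 stop -p scheduled-stop-558962 --cancel-scheduled                              # cancel before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-558962 --schedule 15s                                  # re-arm with a short delay
	out/minikube-linux-amd64 status -p scheduled-stop-558962                                               # exit status 7 once the stop has fired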

                                                
                                    
x
+
TestRunningBinaryUpgrade (161.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2800982106 start -p running-upgrade-394797 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2800982106 start -p running-upgrade-394797 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m50.565822968s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-394797 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-394797 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.798595969s)
helpers_test.go:175: Cleaning up "running-upgrade-394797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-394797
--- PASS: TestRunningBinaryUpgrade (161.76s)

                                                
                                    
x
+
TestKubernetesUpgrade (199.89s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.019643075s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-410200
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-410200: (2.273241477s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-410200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-410200 status --format={{.Host}}: exit status 7 (95.665661ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.192030838s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-410200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (98.981786ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-410200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-410200
	    minikube start -p kubernetes-upgrade-410200 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4102002 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-410200 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:41:32.969862    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:41:51.007559    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.149440709s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-410200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-410200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-410200: (1.000649938s)
--- PASS: TestKubernetesUpgrade (199.89s)
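Condensed, the upgrade path this test validates is: bring up a cluster on v1.28.0, stop it, restart it on v1.34.0, then confirm that asking for the older version again is refused. A sketch with the logging flags trimmed (profile name and versions are the ones used above):
	out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-410200
	out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p kubernetes-upgrade-410200 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # refused: exit 106, K8S_DOWNGRADE_UNSUPPORTED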

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (73.357719ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-389023] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (86.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-389023 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-389023 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.492570766s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-389023 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (86.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (28.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (27.264677376s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-389023 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-389023 status -o json: exit status 2 (285.675351ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-389023","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-389023
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.43s)
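Re-running start on the existing profile with --no-kubernetes keeps the VM but tears down the Kubernetes components, which is why status then exits 2 with the JSON shown above. In short, based on this run:
	out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-389023 status -o json    # exit status 2: Host "Running", Kubelet and APIServer "Stopped"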

                                                
                                    
x
+
TestPause/serial/Start (91.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-298014 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-298014 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m31.779528667s)
--- PASS: TestPause/serial/Start (91.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (49.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-389023 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.191251964s)
--- PASS: TestNoKubernetes/serial/Start (49.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (105.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3180647656 start -p stopped-upgrade-217447 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3180647656 start -p stopped-upgrade-217447 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.247918288s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3180647656 -p stopped-upgrade-217447 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3180647656 -p stopped-upgrade-217447 stop: (1.841306796s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-217447 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-217447 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.749632533s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.84s)
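This test drives the stopped-cluster upgrade path: an archived v1.32.0 binary (unpacked to a randomly named file under /tmp for this run) creates and stops the cluster, and the freshly built binary then starts it in place. The core sequence, with logging flags trimmed:
	/tmp/minikube-v1.32.0.3180647656 start -p stopped-upgrade-217447 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.3180647656 -p stopped-upgrade-217447 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-217447 --memory=3072 --driver=kvm2 --container-runtime=crio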

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-389023 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-389023 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.765062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
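The assertion here is simply that kubelet is not an active unit inside the guest, so a non-zero exit from systemctl is the passing case. The probe, as run above:
	out/minikube-linux-amd64 ssh -p NoKubernetes-389023 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit (observed: 1, with the guest reporting status 4) when kubelet is not active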

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-389023
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-389023: (1.343656227s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (39.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-389023 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-389023 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.080835149s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-389023 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-389023 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.72164ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-421834 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-421834 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (131.36386ms)

                                                
                                                
-- stdout --
	* [false-421834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:42:43.489721   49671 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:42:43.489936   49671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:43.489950   49671 out.go:374] Setting ErrFile to fd 2...
	I0926 23:42:43.489957   49671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:42:43.492651   49671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-6020/.minikube/bin
	I0926 23:42:43.493729   49671 out.go:368] Setting JSON to false
	I0926 23:42:43.494992   49671 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5108,"bootTime":1758925055,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:42:43.495115   49671 start.go:140] virtualization: kvm guest
	I0926 23:42:43.496531   49671 out.go:179] * [false-421834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:42:43.498078   49671 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:42:43.498094   49671 notify.go:220] Checking for updates...
	I0926 23:42:43.500966   49671 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:42:43.502233   49671 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-6020/kubeconfig
	I0926 23:42:43.503456   49671 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-6020/.minikube
	I0926 23:42:43.504998   49671 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:42:43.506387   49671 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:42:43.508266   49671 config.go:182] Loaded profile config "force-systemd-env-429303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:43.508486   49671 config.go:182] Loaded profile config "pause-298014": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:42:43.508622   49671 config.go:182] Loaded profile config "stopped-upgrade-217447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0926 23:42:43.508840   49671 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:42:43.554307   49671 out.go:179] * Using the kvm2 driver based on user configuration
	I0926 23:42:43.555662   49671 start.go:304] selected driver: kvm2
	I0926 23:42:43.555687   49671 start.go:924] validating driver "kvm2" against <nil>
	I0926 23:42:43.555705   49671 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:42:43.557787   49671 out.go:203] 
	W0926 23:42:43.558696   49671 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0926 23:42:43.559909   49671 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-421834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.242:8443
  name: pause-298014
contexts:
- context:
    cluster: pause-298014
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-298014
  name: pause-298014
current-context: ""
kind: Config
users:
- name: pause-298014
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.crt
    client-key: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-421834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421834"

                                                
                                                
----------------------- debugLogs end: false-421834 [took: 3.40689479s] --------------------------------
helpers_test.go:175: Cleaning up "false-421834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-421834
--- PASS: TestNetworkPlugins/group/false (3.72s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-217447
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-217447: (1.018550683s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (103.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-770749 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-770749 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m43.195935801s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (103.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (119.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m59.568597858s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-059658 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-059658 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m41.925782936s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-770749 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dde9de9e-2f8c-4237-ab1e-17e5feb90730] Pending
helpers_test.go:352: "busybox" [dde9de9e-2f8c-4237-ab1e-17e5feb90730] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dde9de9e-2f8c-4237-ab1e-17e5feb90730] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005190517s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-770749 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-770749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-770749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.698571902s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-770749 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (85.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-770749 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-770749 --alsologtostderr -v=3: (1m25.605060557s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-534592 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [02029043-2257-41d7-9def-e479f70e1873] Pending
helpers_test.go:352: "busybox" [02029043-2257-41d7-9def-e479f70e1873] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [02029043-2257-41d7-9def-e479f70e1873] Running
E0926 23:46:32.969581    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006080855s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-534592 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-059658 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [894581c4-c927-4c47-bdd4-2f0e1a1a654b] Pending
helpers_test.go:352: "busybox" [894581c4-c927-4c47-bdd4-2f0e1a1a654b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [894581c4-c927-4c47-bdd4-2f0e1a1a654b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00411374s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-059658 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-534592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005851167s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-534592 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (84.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-534592 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-534592 --alsologtostderr -v=3: (1m24.033449461s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (84.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-059658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-059658 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (74.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-059658 --alsologtostderr -v=3
E0926 23:46:51.008430    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-059658 --alsologtostderr -v=3: (1m14.151205046s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (74.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-770749 -n old-k8s-version-770749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-770749 -n old-k8s-version-770749: exit status 7 (64.172916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-770749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-770749 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-770749 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (47.73879382s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-770749 -n old-k8s-version-770749
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-626774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-626774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (49.370088049s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gcqwq" [29e9a446-c14c-4668-ad07-26c585018c85] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gcqwq" [29e9a446-c14c-4668-ad07-26c585018c85] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004216937s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658: exit status 7 (62.425171ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-059658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-059658 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-059658 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (55.467737022s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534592 -n no-preload-534592
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534592 -n no-preload-534592: exit status 7 (65.736604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-534592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (85.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-534592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-534592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m25.175712462s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-534592 -n no-preload-534592
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (85.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gcqwq" [29e9a446-c14c-4668-ad07-26c585018c85] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005773189s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-770749 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-770749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-770749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-770749 --alsologtostderr -v=1: (1.091490685s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-770749 -n old-k8s-version-770749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-770749 -n old-k8s-version-770749: exit status 2 (315.38735ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-770749 -n old-k8s-version-770749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-770749 -n old-k8s-version-770749: exit status 2 (310.020781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-770749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-770749 -n old-k8s-version-770749
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-770749 -n old-k8s-version-770749
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (119.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-994238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-994238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m59.669317757s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (119.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-626774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-626774 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.25500158s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-626774 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-626774 --alsologtostderr -v=3: (9.005778805s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-626774 -n newest-cni-626774
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-626774 -n newest-cni-626774: exit status 7 (78.103914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-626774 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (60.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-626774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-626774 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m0.219036599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-626774 -n newest-cni-626774
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (60.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k5jvm" [9aba3f36-2759-4356-b6b6-ceaebf5fb355] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k5jvm" [9aba3f36-2759-4356-b6b6-ceaebf5fb355] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.00400882s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k5jvm" [9aba3f36-2759-4356-b6b6-ceaebf5fb355] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005332463s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-059658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-059658 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-059658 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-059658 --alsologtostderr -v=1: (1.283085143s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658: exit status 2 (326.323346ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658: exit status 2 (334.736418ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-059658 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-059658 --alsologtostderr -v=1: (1.17560452s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059658 -n default-k8s-diff-port-059658
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

TestNetworkPlugins/group/auto/Start (101.54s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.539752969s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.54s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xh274" [da5493ab-d069-45ff-b141-6239e539afd7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007486466s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xh274" [da5493ab-d069-45ff-b141-6239e539afd7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004320384s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-534592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-534592 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.32s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-534592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-534592 --alsologtostderr -v=1: (1.144439443s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534592 -n no-preload-534592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534592 -n no-preload-534592: exit status 2 (263.719179ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-534592 -n no-preload-534592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-534592 -n no-preload-534592: exit status 2 (250.002099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-534592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-534592 -n no-preload-534592
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-534592 -n no-preload-534592
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.32s)

TestNetworkPlugins/group/kindnet/Start (67.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.370935999s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-626774 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (4.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-626774 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-626774 --alsologtostderr -v=1: (1.470490968s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-626774 -n newest-cni-626774
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-626774 -n newest-cni-626774: exit status 2 (407.365611ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-626774 -n newest-cni-626774
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-626774 -n newest-cni-626774: exit status 2 (399.98333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-626774 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-626774 --alsologtostderr -v=1: (1.339580205s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-626774 -n newest-cni-626774
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-626774 -n newest-cni-626774
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.54s)

TestNetworkPlugins/group/calico/Start (90.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.328063873s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.33s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-994238 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8df1ce95-e9fb-4055-b0b2-1cba8175d80c] Pending
helpers_test.go:352: "busybox" [8df1ce95-e9fb-4055-b0b2-1cba8175d80c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8df1ce95-e9fb-4055-b0b2-1cba8175d80c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004953131s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-994238 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-994238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0926 23:50:26.314117    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.320545    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.331980    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.353451    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.394880    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.477182    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.639559    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:26.961092    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-994238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18791018s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-994238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/embed-certs/serial/Stop (85.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-994238 --alsologtostderr -v=3
E0926 23:50:27.603320    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:28.885670    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:31.447761    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:36.569965    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:50:46.811898    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-994238 --alsologtostderr -v=3: (1m25.251128687s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8fr87" [96f1ba52-9750-4b09-8ddd-a38c606e1b40] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006239993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-421834 "pgrep -a kubelet"
I0926 23:50:57.186610    9914 config.go:182] Loaded profile config "auto-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-421834 replace --force -f testdata/netcat-deployment.yaml
I0926 23:50:57.536360    9914 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-drfq9" [94c6c1d2-aa61-413f-b500-fd950605aad0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-drfq9" [94c6c1d2-aa61-413f-b500-fd950605aad0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.121578901s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.49s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-421834 "pgrep -a kubelet"
I0926 23:50:59.234997    9914 config.go:182] Loaded profile config "kindnet-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-421834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t8nmr" [67a44e80-a5a8-422e-8022-b01e5e6cf9fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t8nmr" [67a44e80-a5a8-422e-8022-b01e5e6cf9fa] Running
E0926 23:51:07.294104    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004475087s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gpslt" [94726b51-3cf1-4a89-a1bf-a2de674c36a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005026326s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (69.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.638823508s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.64s)

TestNetworkPlugins/group/enable-default-cni/Start (108.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:51:28.494804    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.501317    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.513372    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.534896    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.576402    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.657904    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:28.819498    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:29.141230    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m48.149644062s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.15s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-421834 "pgrep -a kubelet"
I0926 23:51:29.680699    9914 config.go:182] Loaded profile config "calico-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-421834 replace --force -f testdata/netcat-deployment.yaml
E0926 23:51:29.783469    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mnx5q" [ca70f4a9-8a84-4850-a7ee-e82a04712b5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0926 23:51:31.065536    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:32.969481    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:33.627916    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.084657    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.672806    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.679236    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.690686    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.712178    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.753726    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:34.835245    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mnx5q" [ca70f4a9-8a84-4850-a7ee-e82a04712b5e] Running
E0926 23:51:34.997068    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:35.318928    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:35.960943    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:37.242458    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:38.749741    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:51:39.803926    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004350556s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.67s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994238 -n embed-certs-994238: exit status 7 (90.936239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-994238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.67s)

TestStartStop/group/embed-certs/serial/SecondStart (63.16s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-994238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0926 23:51:55.167070    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-994238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m2.756217607s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-994238 -n embed-certs-994238
E0926 23:52:56.611284    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (63.16s)

TestNetworkPlugins/group/flannel/Start (102.39s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:52:09.473280    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:52:15.649261    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.387875845s)
--- PASS: TestNetworkPlugins/group/flannel/Start (102.39s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-421834 "pgrep -a kubelet"
I0926 23:52:33.929010    9914 config.go:182] Loaded profile config "custom-flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-421834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ftpf5" [1f19b73a-0bcd-4cd7-abe0-466c672766e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ftpf5" [1f19b73a-0bcd-4cd7-abe0-466c672766e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005867641s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (88.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0926 23:53:10.178467    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-421834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.668762931s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-421834 "pgrep -a kubelet"
I0926 23:53:15.598117    9914 config.go:182] Loaded profile config "enable-default-cni-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-421834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fdhgd" [87bd56c5-fa6a-47bf-a1f8-0a6478eea3d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fdhgd" [87bd56c5-fa6a-47bf-a1f8-0a6478eea3d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006257173s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-l29dg" [034c010c-29f1-4c43-a544-f1c84cfa49b1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005924187s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-421834 "pgrep -a kubelet"
I0926 23:53:48.115059    9914 config.go:182] Loaded profile config "flannel-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-421834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w77tj" [e90300f4-c194-4362-a85a-0ed4ada62fa3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w77tj" [e90300f4-c194-4362-a85a-0ed4ada62fa3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004552198s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-421834 "pgrep -a kubelet"
I0926 23:54:32.064629    9914 config.go:182] Loaded profile config "bridge-421834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-421834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j6w74" [659d5136-72bc-42e6-b80f-1f8c095278a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j6w74" [659d5136-72bc-42e6-b80f-1f8c095278a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004067264s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-421834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-421834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0926 23:55:26.314465    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:52.987609    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:52.994108    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.005640    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.027175    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.068657    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.150189    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.311848    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:53.633897    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:54.020885    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:54.275735    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:55.557877    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.509722    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.516149    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.527639    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.549138    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.590616    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.672600    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:57.834209    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:58.119957    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:58.156414    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:55:58.798122    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:00.079918    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:02.641252    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:03.241968    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:07.762947    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:13.483573    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:18.004529    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.450084    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.456584    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.468070    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.489554    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.531002    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.612473    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:23.774714    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:24.096477    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:24.738370    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:26.020269    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:28.495397    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:28.581925    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:32.968957    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:33.703884    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:33.965532    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:34.673280    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:38.486244    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:43.945387    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:51.008191    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:56:56.198875    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:02.374973    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:04.427218    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:14.927732    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:19.448239    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.193074    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.199477    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.210916    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.232306    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.273738    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.355186    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.516760    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:34.838937    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:35.481021    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:36.762940    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:39.325081    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:44.446943    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:45.389364    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:57:54.688479    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:15.169976    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:15.937786    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:15.944188    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:15.955562    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:15.976936    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:16.018379    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:16.099856    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:16.261441    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:16.583422    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:17.224972    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:18.506988    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:21.068292    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:26.189724    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:36.432046    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:36.849957    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.369850    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.885673    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.892096    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.903485    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.924895    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:41.966385    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:42.047917    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:42.209451    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:42.531335    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:43.173252    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:44.455008    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:47.017060    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:52.138393    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:56.132312    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:58:56.913894    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:02.380740    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:07.311167    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:22.862926    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.319407    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.325777    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.337193    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.358658    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.400168    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.481682    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.643230    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:32.965397    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:33.606877    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:34.888887    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:37.450512    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:37.875619    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:42.571838    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:59:52.813135    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:03.824420    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:13.295474    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:18.054593    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/custom-flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:26.314075    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/old-k8s-version-770749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:52.987347    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:54.257402    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/bridge-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:57.509721    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:00:59.796991    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/enable-default-cni-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:20.692082    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/kindnet-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:23.450954    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:25.211978    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/auto-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:25.746526    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/flannel-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:28.494722    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/no-preload-534592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:32.969061    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/addons-330674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:34.672484    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/default-k8s-diff-port-059658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:51.008000    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/functional-615476/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0927 00:01:51.152568    9914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/calico-421834/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
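The cert_rotation errors above appear to come from client-go trying to reload client certificates for kubeconfig entries whose minikube profiles have already been deleted; they are log noise emitted between tests, not test failures. A hedged cleanup sketch for one such leftover entry (kindnet-421834 is taken from the log above; `minikube delete -p <profile>` normally removes these entries itself):

  # Drop the stale kubeconfig entries so nothing keeps re-opening the deleted client.crt.
  kubectl config delete-context kindnet-421834
  kubectl config delete-cluster kindnet-421834
  kubectl config delete-user kindnet-421834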

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-994238 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
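The image audit can be rerun with the same subcommand the test invokes; a sketch, assuming the embed-certs-994238 profile:

  # List the images in the node's container runtime as JSON, exactly as the test does.
  minikube -p embed-certs-994238 image list --format=json
  # Table output is easier to scan interactively.
  minikube -p embed-certs-994238 image list --format=table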

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-994238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238: exit status 2 (258.603221ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994238 -n embed-certs-994238: exit status 2 (273.230941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-994238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-994238 -n embed-certs-994238
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-994238 -n embed-certs-994238
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.88s)
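The pause/unpause round-trip above can be replayed by hand; while components are paused or stopped, minikube status exits non-zero, which is why the harness records the exit-status-2 results as "may be ok". A sketch, assuming the embed-certs-994238 profile:

  # Pause the cluster, confirm the reported component states, then unpause.
  minikube pause -p embed-certs-994238
  minikube status -p embed-certs-994238 --format={{.APIServer}}   # Paused (exit status 2)
  minikube status -p embed-certs-994238 --format={{.Kubelet}}     # Stopped (exit status 2)
  minikube unpause -p embed-certs-994238
  minikube status -p embed-certs-994238 --format={{.APIServer}}   # Running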

                                                
                                    

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.34
278 TestNetworkPlugins/group/kubenet 2.98
286 TestNetworkPlugins/group/cilium 3.83
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-330674 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
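All of the TunnelCmd skips above share one cause: the test host requires a password for sudo, so the harness cannot run 'route' to manage tunnel routes. A hedged sketch of a sudoers drop-in that would let these tests run; the user name and binary paths are assumptions and should be adjusted for the CI host:

  # /etc/sudoers.d/minikube-tunnel - allow the CI user to manage routes without a password (assumed user name).
  jenkins ALL=(ALL) NOPASSWD: /sbin/route, /sbin/ip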

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.34s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-280338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-280338
--- SKIP: TestStartStop/group/disable-driver-mounts (0.34s)

TestNetworkPlugins/group/kubenet (2.98s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-421834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.242:8443
  name: pause-298014
contexts:
- context:
    cluster: pause-298014
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-298014
  name: pause-298014
current-context: ""
kind: Config
users:
- name: pause-298014
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.crt
    client-key: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-421834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421834"

                                                
                                                
----------------------- debugLogs end: kubenet-421834 [took: 2.82053954s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-421834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-421834
--- SKIP: TestNetworkPlugins/group/kubenet (2.98s)

TestNetworkPlugins/group/cilium (3.83s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-421834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-421834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-6020/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.242:8443
  name: pause-298014
contexts:
- context:
    cluster: pause-298014
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-298014
  name: pause-298014
current-context: ""
kind: Config
users:
- name: pause-298014
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.crt
    client-key: /home/jenkins/minikube-integration/21642-6020/.minikube/profiles/pause-298014/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-421834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-421834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421834"

                                                
                                                
----------------------- debugLogs end: cilium-421834 [took: 3.656923841s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-421834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-421834
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)