Test Report: KVM_Linux_crio 21924

af8f7912417d9ebc8a76a18bcb87417cd1a63b57:2025-11-19:42387

Failed tests (1/19)

Order  Failed test              Duration (s)
41     TestAddons/parallel/CSI  7200.054

TestAddons/parallel/CSI (7200.054s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1119 01:59:11.556959  305349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 01:59:11.568460  305349 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 01:59:11.568490  305349 kapi.go:107] duration metric: took 11.54914ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 11.559942ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-218289 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc -o jsonpath={.status.phase} -n default
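
The repeated helpers_test.go:402 lines above are the harness polling the claim's .status.phase through jsonpath until it reaches Bound. Outside the harness, a single kubectl wait (a sketch of an equivalent check, not a command the test itself runs) expresses the same 6m0s wait:

    kubectl --context addons-218289 wait pvc/hpvc -n default \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s
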
addons_test.go:562: (dbg) Run:  kubectl --context addons-218289 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [997e2dd7-7456-4c6a-aa01-cd8796e59331] Pending
helpers_test.go:352: "task-pv-pod" [997e2dd7-7456-4c6a-aa01-cd8796e59331] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [997e2dd7-7456-4c6a-aa01-cd8796e59331] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005221163s
addons_test.go:572: (dbg) Run:  kubectl --context addons-218289 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-218289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-218289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-218289 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-218289 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-218289 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-218289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-218289 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [720e1e0b-ad86-4787-aea0-8bbccbd2857f] Pending
helpers_test.go:352: "task-pv-pod-restore" [720e1e0b-ad86-4787-aea0-8bbccbd2857f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:609: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:609: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-218289 -n addons-218289
addons_test.go:609: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-19 02:05:43.555154685 +0000 UTC m=+570.228278505
addons_test.go:609: (dbg) Run:  kubectl --context addons-218289 describe po task-pv-pod-restore -n default
addons_test.go:609: (dbg) kubectl --context addons-218289 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-218289/192.168.39.195
Start Time:       Wed, 19 Nov 2025 01:59:43 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
  IP:  10.244.0.32
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rgbc4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-rgbc4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-218289
  Normal   Pulling    70s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     38s (x5 over 5m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     38s (x5 over 5m30s)  kubelet            Error: ErrImagePull
  Normal   BackOff    9s (x12 over 5m29s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     9s (x12 over 5m29s)  kubelet            Error: ImagePullBackOff
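
The Events above give the real failure mode: every attempt to pull docker.io/nginx hits Docker Hub's unauthenticated pull rate limit, so the restore pod never leaves ImagePullBackOff. The CSI path itself succeeded (PodReadyToStartContainers is True, so the restored claim mounted). One way to make such a run immune to the limit, sketched here on the assumption that the CI host can still pull or has run docker login, is to side-load the image so the kubelet never pulls remotely:

    # pull once on the host, then load it into the minikube node's runtime
    docker pull docker.io/library/nginx:latest
    minikube -p addons-218289 image load docker.io/library/nginx:latest
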
addons_test.go:609: (dbg) Run:  kubectl --context addons-218289 logs task-pv-pod-restore -n default
addons_test.go:609: (dbg) Non-zero exit: kubectl --context addons-218289 logs task-pv-pod-restore -n default: exit status 1 (79.768962ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:609: kubectl --context addons-218289 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:610: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
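
The timeout is therefore a registry problem, not a CSI one, and it can be confirmed from the node itself: a manual pull through the CRI (a sketch, assuming crictl is available inside the minikube VM, as it normally is with the crio runtime) should return the same toomanyrequests error while the limit is in effect:

    minikube -p addons-218289 ssh -- sudo crictl pull docker.io/library/nginx:latest
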
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-218289 -n addons-218289
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-218289 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-218289 logs -n 25: (1.162488521s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-571460                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-571460 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ --download-only -p binary-mirror-171694 --alsologtostderr --binary-mirror http://127.0.0.1:33811 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-171694 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ -p binary-mirror-171694                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-171694 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ addons  │ enable dashboard -p addons-218289                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-218289                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ start   │ -p addons-218289 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-218289 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-218289 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ enable headlamp -p addons-218289 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:58 UTC │ 19 Nov 25 01:58 UTC │
	│ addons  │ addons-218289 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ ip      │ addons-218289 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ ssh     │ addons-218289 ssh cat /opt/local-path-provisioner/pvc-a7d20d00-8ba0-4a78-b669-7766c65e6281_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ ssh     │ addons-218289 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-218289                                                                                                                                                                                                                                                                                                                                                                                         │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ addons  │ addons-218289 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 01:59 UTC │ 19 Nov 25 01:59 UTC │
	│ ip      │ addons-218289 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	│ addons  │ addons-218289 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	│ addons  │ addons-218289 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-218289        │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:26.459864  305924 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:26.460152  305924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:26.460160  305924 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:26.460165  305924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:26.460359  305924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-301472/.minikube/bin
	I1119 01:56:26.460907  305924 out.go:368] Setting JSON to false
	I1119 01:56:26.461755  305924 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34637,"bootTime":1763482749,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:26.461853  305924 start.go:143] virtualization: kvm guest
	I1119 01:56:26.463631  305924 out.go:179] * [addons-218289] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:26.464722  305924 notify.go:221] Checking for updates...
	I1119 01:56:26.464754  305924 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 01:56:26.466092  305924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:26.467239  305924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-301472/kubeconfig
	I1119 01:56:26.468566  305924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-301472/.minikube
	I1119 01:56:26.469642  305924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 01:56:26.473520  305924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 01:56:26.474948  305924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:26.506625  305924 out.go:179] * Using the kvm2 driver based on user configuration
	I1119 01:56:26.507734  305924 start.go:309] selected driver: kvm2
	I1119 01:56:26.507750  305924 start.go:930] validating driver "kvm2" against <nil>
	I1119 01:56:26.507763  305924 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 01:56:26.508790  305924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:26.509115  305924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:56:26.509157  305924 cni.go:84] Creating CNI manager for ""
	I1119 01:56:26.509215  305924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 01:56:26.509227  305924 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:26.509298  305924 start.go:353] cluster config:
	{Name:addons-218289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-218289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:26.509423  305924 iso.go:125] acquiring lock: {Name:mkd04a343eda8a14ae76b35bb2e328c425b1a958 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:56:26.510896  305924 out.go:179] * Starting "addons-218289" primary control-plane node in "addons-218289" cluster
	I1119 01:56:26.511953  305924 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:26.511997  305924 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:26.512015  305924 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:26.512090  305924 preload.go:238] Found /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 01:56:26.512100  305924 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 01:56:26.512550  305924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/config.json ...
	I1119 01:56:26.512582  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/config.json: {Name:mk81e9b7b1599889b03e568ac4eeb72953c85374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:26.512771  305924 start.go:360] acquireMachinesLock for addons-218289: {Name:mk4daca6d905d1576b85a916f411f66d8444dba9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 01:56:26.512824  305924 start.go:364] duration metric: took 38.143µs to acquireMachinesLock for "addons-218289"
	I1119 01:56:26.512851  305924 start.go:93] Provisioning new machine with config: &{Name:addons-218289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-218289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:56:26.512901  305924 start.go:125] createHost starting for "" (driver="kvm2")
	I1119 01:56:26.514364  305924 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1119 01:56:26.514530  305924 start.go:159] libmachine.API.Create for "addons-218289" (driver="kvm2")
	I1119 01:56:26.514560  305924 client.go:173] LocalClient.Create starting
	I1119 01:56:26.514647  305924 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem
	I1119 01:56:26.753925  305924 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/cert.pem
	I1119 01:56:27.008795  305924 main.go:143] libmachine: creating domain...
	I1119 01:56:27.008820  305924 main.go:143] libmachine: creating network...
	I1119 01:56:27.010587  305924 main.go:143] libmachine: found existing default network
	I1119 01:56:27.010900  305924 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1119 01:56:27.011617  305924 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001deedd0}
	I1119 01:56:27.011735  305924 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-218289</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1119 01:56:27.017725  305924 main.go:143] libmachine: creating private network mk-addons-218289 192.168.39.0/24...
	I1119 01:56:27.089525  305924 main.go:143] libmachine: private network mk-addons-218289 192.168.39.0/24 created
	I1119 01:56:27.089815  305924 main.go:143] libmachine: <network>
	  <name>mk-addons-218289</name>
	  <uuid>115f00ba-2fd1-487a-811b-27cace9443db</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:e8:03:33'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1119 01:56:27.089842  305924 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289 ...
	I1119 01:56:27.089864  305924 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21924-301472/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1119 01:56:27.089884  305924 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21924-301472/.minikube
	I1119 01:56:27.089965  305924 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21924-301472/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21924-301472/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1119 01:56:27.359180  305924 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa...
	I1119 01:56:27.629628  305924 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/addons-218289.rawdisk...
	I1119 01:56:27.629689  305924 main.go:143] libmachine: Writing magic tar header
	I1119 01:56:27.629731  305924 main.go:143] libmachine: Writing SSH key tar header
	I1119 01:56:27.629818  305924 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289 ...
	I1119 01:56:27.629888  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289
	I1119 01:56:27.629922  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289 (perms=drwx------)
	I1119 01:56:27.629935  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21924-301472/.minikube/machines
	I1119 01:56:27.629945  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21924-301472/.minikube/machines (perms=drwxr-xr-x)
	I1119 01:56:27.629958  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21924-301472/.minikube
	I1119 01:56:27.629970  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21924-301472/.minikube (perms=drwxr-xr-x)
	I1119 01:56:27.629981  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21924-301472
	I1119 01:56:27.629992  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21924-301472 (perms=drwxrwxr-x)
	I1119 01:56:27.630001  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1119 01:56:27.630011  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1119 01:56:27.630023  305924 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1119 01:56:27.630031  305924 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1119 01:56:27.630042  305924 main.go:143] libmachine: checking permissions on dir: /home
	I1119 01:56:27.630048  305924 main.go:143] libmachine: skipping /home - not owner
	I1119 01:56:27.630053  305924 main.go:143] libmachine: defining domain...
	I1119 01:56:27.631684  305924 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-218289</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/addons-218289.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-218289'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1119 01:56:27.637075  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:6e:6b:85 in network default
	I1119 01:56:27.637845  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:27.637869  305924 main.go:143] libmachine: starting domain...
	I1119 01:56:27.637874  305924 main.go:143] libmachine: ensuring networks are active...
	I1119 01:56:27.638710  305924 main.go:143] libmachine: Ensuring network default is active
	I1119 01:56:27.639069  305924 main.go:143] libmachine: Ensuring network mk-addons-218289 is active
	I1119 01:56:27.639649  305924 main.go:143] libmachine: getting domain XML...
	I1119 01:56:27.640772  305924 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-218289</name>
	  <uuid>02a5de5c-fd4a-4c92-8903-09ec498225b5</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/addons-218289.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b3:2c:30'/>
	      <source network='mk-addons-218289'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:6e:6b:85'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 01:56:28.928752  305924 main.go:143] libmachine: waiting for domain to start...
	I1119 01:56:28.930377  305924 main.go:143] libmachine: domain is now running
	I1119 01:56:28.930400  305924 main.go:143] libmachine: waiting for IP...
	I1119 01:56:28.931354  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:28.932028  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:28.932046  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:28.932383  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:28.932445  305924 retry.go:31] will retry after 287.248889ms: waiting for domain to come up
	I1119 01:56:29.221107  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:29.221784  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:29.221800  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:29.222115  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:29.222152  305924 retry.go:31] will retry after 255.499632ms: waiting for domain to come up
	I1119 01:56:29.479949  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:29.480809  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:29.480834  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:29.481184  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:29.481233  305924 retry.go:31] will retry after 402.006439ms: waiting for domain to come up
	I1119 01:56:29.884887  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:29.885537  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:29.885553  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:29.885829  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:29.885870  305924 retry.go:31] will retry after 422.861388ms: waiting for domain to come up
	I1119 01:56:30.310452  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:30.311082  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:30.311096  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:30.311461  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:30.311507  305924 retry.go:31] will retry after 753.029988ms: waiting for domain to come up
	I1119 01:56:31.066572  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:31.067286  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:31.067305  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:31.067690  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:31.067731  305924 retry.go:31] will retry after 600.186931ms: waiting for domain to come up
	I1119 01:56:31.669736  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:31.670412  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:31.670434  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:31.670765  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:31.670817  305924 retry.go:31] will retry after 1.130696301s: waiting for domain to come up
	I1119 01:56:32.803107  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:32.803788  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:32.803811  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:32.804098  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:32.804138  305924 retry.go:31] will retry after 1.055785333s: waiting for domain to come up
	I1119 01:56:33.861189  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:33.861799  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:33.861819  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:33.862212  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:33.862267  305924 retry.go:31] will retry after 1.496835415s: waiting for domain to come up
	I1119 01:56:35.360923  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:35.361503  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:35.361520  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:35.361850  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:35.361889  305924 retry.go:31] will retry after 2.2474044s: waiting for domain to come up
	I1119 01:56:37.611806  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:37.612775  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:37.612796  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:37.613352  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:37.613398  305924 retry.go:31] will retry after 2.275377198s: waiting for domain to come up
	I1119 01:56:39.890076  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:39.890904  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:39.890925  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:39.891312  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:39.891359  305924 retry.go:31] will retry after 3.110879612s: waiting for domain to come up
	I1119 01:56:43.004483  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:43.005219  305924 main.go:143] libmachine: no network interface addresses found for domain addons-218289 (source=lease)
	I1119 01:56:43.005240  305924 main.go:143] libmachine: trying to list again with source=arp
	I1119 01:56:43.005705  305924 main.go:143] libmachine: unable to find current IP address of domain addons-218289 in network mk-addons-218289 (interfaces detected: [])
	I1119 01:56:43.005751  305924 retry.go:31] will retry after 4.273841577s: waiting for domain to come up
	I1119 01:56:47.281044  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.281721  305924 main.go:143] libmachine: domain addons-218289 has current primary IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.281741  305924 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1119 01:56:47.281749  305924 main.go:143] libmachine: reserving static IP address...
	I1119 01:56:47.282198  305924 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-218289", mac: "52:54:00:b3:2c:30", ip: "192.168.39.195"} in network mk-addons-218289
	I1119 01:56:47.515199  305924 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-218289
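The "will retry after ..." lines above are minikube's backoff poll: libvirt is asked for the domain's lease/ARP address repeatedly, with a growing delay, until an IP appears. A minimal Go sketch of that pattern, assuming a hypothetical lookup callback (this is not retry.go's actual API, and the real helper adds jitter):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// pollForIP retries lookup with a growing delay until the domain reports
	// an address or the overall deadline passes. Names are illustrative only.
	func pollForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		delay := 1500 * time.Millisecond
		for start := time.Now(); time.Since(start) < deadline; {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			log.Printf("will retry after %v: waiting for domain to come up", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grows roughly like the intervals in the log
		}
		return "", errors.New("domain did not come up in time")
	}

	func main() {
		tries := 0
		ip, err := pollForIP(func() (string, error) {
			if tries++; tries < 3 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.195", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}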
	I1119 01:56:47.515266  305924 main.go:143] libmachine: waiting for SSH...
	I1119 01:56:47.515300  305924 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 01:56:47.518658  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.519115  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.519149  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.519455  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:47.519727  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:47.519741  305924 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 01:56:47.624714  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
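The WaitForSSH step above simply dials the guest and runs `exit 0` until that succeeds. A self-contained sketch of one probe attempt using golang.org/x/crypto/ssh, with the user and key path taken from the log (the real code wraps this in a retry loop):

	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.39.195:22", cfg)
		if err != nil {
			log.Fatal(err) // sshd not up yet; the real code retries
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Print("SSH is up")
	}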
	I1119 01:56:47.625159  305924 main.go:143] libmachine: domain creation complete
	I1119 01:56:47.626830  305924 machine.go:94] provisionDockerMachine start ...
	I1119 01:56:47.629392  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.629733  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.629756  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.629905  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:47.630119  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:47.630132  305924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 01:56:47.733389  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 01:56:47.733426  305924 buildroot.go:166] provisioning hostname "addons-218289"
	I1119 01:56:47.736390  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.736851  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.736876  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.737101  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:47.737337  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:47.737349  305924 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-218289 && echo "addons-218289" | sudo tee /etc/hostname
	I1119 01:56:47.859856  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-218289
	
	I1119 01:56:47.862982  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.863465  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.863494  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.863730  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:47.863990  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:47.864011  305924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-218289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-218289/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-218289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 01:56:47.977108  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 01:56:47.977172  305924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21924-301472/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-301472/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-301472/.minikube}
	I1119 01:56:47.977225  305924 buildroot.go:174] setting up certificates
	I1119 01:56:47.977239  305924 provision.go:84] configureAuth start
	I1119 01:56:47.980833  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.981307  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.981336  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.983857  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.984289  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:47.984329  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:47.984478  305924 provision.go:143] copyHostCerts
	I1119 01:56:47.984572  305924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-301472/.minikube/key.pem (1675 bytes)
	I1119 01:56:47.984715  305924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-301472/.minikube/ca.pem (1078 bytes)
	I1119 01:56:47.984800  305924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-301472/.minikube/cert.pem (1123 bytes)
	I1119 01:56:47.984869  305924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-301472/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca-key.pem org=jenkins.addons-218289 san=[127.0.0.1 192.168.39.195 addons-218289 localhost minikube]
	I1119 01:56:48.047148  305924 provision.go:177] copyRemoteCerts
	I1119 01:56:48.047236  305924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 01:56:48.049930  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.050339  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.050378  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.050552  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:56:48.133077  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 01:56:48.164492  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 01:56:48.196034  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 01:56:48.226788  305924 provision.go:87] duration metric: took 249.532569ms to configureAuth
	I1119 01:56:48.226826  305924 buildroot.go:189] setting minikube options for container-runtime
	I1119 01:56:48.227083  305924 config.go:182] Loaded profile config "addons-218289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:56:48.230240  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.230728  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.230754  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.230969  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.231220  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:48.231241  305924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 01:56:48.487031  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 01:56:48.487068  305924 machine.go:97] duration metric: took 860.21519ms to provisionDockerMachine
	I1119 01:56:48.487086  305924 client.go:176] duration metric: took 21.972517442s to LocalClient.Create
	I1119 01:56:48.487104  305924 start.go:167] duration metric: took 21.972574164s to libmachine.API.Create "addons-218289"
	I1119 01:56:48.487114  305924 start.go:293] postStartSetup for "addons-218289" (driver="kvm2")
	I1119 01:56:48.487127  305924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 01:56:48.487219  305924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 01:56:48.489926  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.490393  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.490430  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.490602  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:56:48.575800  305924 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 01:56:48.581206  305924 info.go:137] Remote host: Buildroot 2025.02
	I1119 01:56:48.581264  305924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-301472/.minikube/addons for local assets ...
	I1119 01:56:48.581370  305924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-301472/.minikube/files for local assets ...
	I1119 01:56:48.581409  305924 start.go:296] duration metric: took 94.286758ms for postStartSetup
	I1119 01:56:48.584759  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.585208  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.585232  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.585550  305924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/config.json ...
	I1119 01:56:48.585754  305924 start.go:128] duration metric: took 22.072842179s to createHost
	I1119 01:56:48.587906  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.588302  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.588326  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.588499  305924 main.go:143] libmachine: Using SSH client type: native
	I1119 01:56:48.588732  305924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1119 01:56:48.588744  305924 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 01:56:48.692098  305924 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763517408.666152713
	
	I1119 01:56:48.692153  305924 fix.go:216] guest clock: 1763517408.666152713
	I1119 01:56:48.692179  305924 fix.go:229] Guest: 2025-11-19 01:56:48.666152713 +0000 UTC Remote: 2025-11-19 01:56:48.585767864 +0000 UTC m=+22.175938443 (delta=80.384849ms)
	I1119 01:56:48.692206  305924 fix.go:200] guest clock delta is within tolerance: 80.384849ms
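fix.go parses the guest's `date +%s.%N` output and compares it against the host-side timestamp to decide whether the clocks need resyncing; here the ~80ms delta is within tolerance. A small sketch of that parse-and-compare, with a hypothetical tolerance value (not minikube's actual threshold):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	// %N is zero-padded to 9 digits, so the fraction is nanoseconds directly.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1763517408.666152713") // value from the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // hypothetical threshold
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
	}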
	I1119 01:56:48.692214  305924 start.go:83] releasing machines lock for "addons-218289", held for 22.179379612s
	I1119 01:56:48.695055  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.695481  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.695539  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.696122  305924 ssh_runner.go:195] Run: cat /version.json
	I1119 01:56:48.696193  305924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 01:56:48.699207  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.699547  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.699665  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.699700  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.699863  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:56:48.700108  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:48.700143  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:48.700358  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:56:48.776320  305924 ssh_runner.go:195] Run: systemctl --version
	I1119 01:56:48.801997  305924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 01:56:48.963286  305924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 01:56:48.971141  305924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 01:56:48.971239  305924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 01:56:48.993288  305924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 01:56:48.993321  305924 start.go:496] detecting cgroup driver to use...
	I1119 01:56:48.993392  305924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 01:56:49.017318  305924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 01:56:49.037392  305924 docker.go:218] disabling cri-docker service (if available) ...
	I1119 01:56:49.037458  305924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 01:56:49.056492  305924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 01:56:49.074005  305924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 01:56:49.226496  305924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 01:56:49.436342  305924 docker.go:234] disabling docker service ...
	I1119 01:56:49.436430  305924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 01:56:49.453309  305924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 01:56:49.469421  305924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 01:56:49.629925  305924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 01:56:49.776317  305924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 01:56:49.793631  305924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 01:56:49.818385  305924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 01:56:49.818466  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.832015  305924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 01:56:49.832083  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.845167  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.858063  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.871694  305924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 01:56:49.885208  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.898395  305924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:56:49.920582  305924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
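Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings. This is a reconstruction implied by the commands, not a dump of the file from the VM:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]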
	I1119 01:56:49.933953  305924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 01:56:49.945090  305924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 01:56:49.945167  305924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 01:56:49.966176  305924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
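The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. The same two steps as a root-only Go sketch (illustrative, not minikube's code):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Load br_netfilter so net.bridge.bridge-nf-call-iptables exists at all.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Print("br_netfilter loaded, IPv4 forwarding on")
	}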
	I1119 01:56:49.979101  305924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:50.118853  305924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 01:56:50.240709  305924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 01:56:50.240829  305924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 01:56:50.246819  305924 start.go:564] Will wait 60s for crictl version
	I1119 01:56:50.246901  305924 ssh_runner.go:195] Run: which crictl
	I1119 01:56:50.251475  305924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 01:56:50.291892  305924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 01:56:50.292009  305924 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.329925  305924 ssh_runner.go:195] Run: crio --version
	I1119 01:56:50.376961  305924 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 01:56:50.383969  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:50.384410  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:56:50.384438  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:56:50.384670  305924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 01:56:50.390581  305924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:56:50.407458  305924 kubeadm.go:884] updating cluster {Name:addons-218289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-218289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1119 01:56:50.407602  305924 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:56:50.407654  305924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:50.439134  305924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 01:56:50.439216  305924 ssh_runner.go:195] Run: which lz4
	I1119 01:56:50.443661  305924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 01:56:50.449616  305924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 01:56:50.449656  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 01:56:52.024936  305924 crio.go:462] duration metric: took 1.581297927s to copy over tarball
	I1119 01:56:52.025033  305924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 01:56:53.670082  305924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.645015206s)
	I1119 01:56:53.670117  305924 crio.go:469] duration metric: took 1.645144978s to extract the tarball
	I1119 01:56:53.670127  305924 ssh_runner.go:146] rm: /preloaded.tar.lz4
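Preload handling is check-then-copy: the earlier stat failure triggered the ~409 MB scp, the tarball was untarred into /var, and the copy was then deleted. The decision logic, sketched against the x/crypto/ssh API with a hypothetical uploadPreload stub standing in for the scp step (not ssh_runner.go itself):

	package provision

	import (
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// ensurePreload skips the expensive copy when the tarball already exists.
	func ensurePreload(client *ssh.Client, localPath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		statErr := sess.Run(`stat -c "%s %y" /preloaded.tar.lz4`)
		sess.Close()
		if statErr == nil {
			return nil // existence check passed; reuse what is there
		}
		return uploadPreload(client, localPath, "/preloaded.tar.lz4")
	}

	// uploadPreload stands in for the scp transfer seen in the log.
	func uploadPreload(client *ssh.Client, local, remote string) error {
		return fmt.Errorf("scp %s -> %s not implemented in this sketch", local, remote)
	}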
	I1119 01:56:53.711461  305924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:56:53.755761  305924 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:56:53.755792  305924 cache_images.go:86] Images are preloaded, skipping loading
	I1119 01:56:53.755802  305924 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1119 01:56:53.755930  305924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-218289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-218289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 01:56:53.756039  305924 ssh_runner.go:195] Run: crio config
	I1119 01:56:53.806466  305924 cni.go:84] Creating CNI manager for ""
	I1119 01:56:53.806500  305924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 01:56:53.806527  305924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 01:56:53.806564  305924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-218289 NodeName:addons-218289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 01:56:53.806727  305924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-218289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 01:56:53.806827  305924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 01:56:53.821057  305924 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 01:56:53.821205  305924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 01:56:53.835492  305924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 01:56:53.858331  305924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 01:56:53.880849  305924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 01:56:53.902618  305924 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1119 01:56:53.907750  305924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
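This is the same idempotent /etc/hosts pattern used for host.minikube.internal at 01:56:50: drop any stale line for the name, append the fresh mapping, and swap the file in via a temporary copy. The equivalent logic in Go, as a local sketch of the remote bash one-liner:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// setHostsEntry rewrites path so exactly one line maps name to ip.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\tname$'
				kept = append(kept, line)
			}
		}
		for len(kept) > 0 && kept[len(kept)-1] == "" {
			kept = kept[:len(kept)-1]
		}
		kept = append(kept, ip+"\t"+name, "") // append mapping, restore final newline
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // swap in the new file; the remote variant uses sudo cp
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.39.195", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}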
	I1119 01:56:53.923369  305924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:56:54.062958  305924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:56:54.099769  305924 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289 for IP: 192.168.39.195
	I1119 01:56:54.099794  305924 certs.go:195] generating shared ca certs ...
	I1119 01:56:54.099815  305924 certs.go:227] acquiring lock for ca certs: {Name:mk471a9d5979576fff4523a3f342afaeda275da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:54.100043  305924 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-301472/.minikube/ca.key
	I1119 01:56:54.719277  305924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-301472/.minikube/ca.crt ...
	I1119 01:56:54.719315  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/ca.crt: {Name:mk6b208d174b3529986ae1d83fb7f61ece41ad0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:54.719534  305924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-301472/.minikube/ca.key ...
	I1119 01:56:54.719552  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/ca.key: {Name:mk33d0add95dabda1ad540d8046e06353ff9dc8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:54.719662  305924 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.key
	I1119 01:56:54.901828  305924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.crt ...
	I1119 01:56:54.901865  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.crt: {Name:mk1b959a3ddc32c8368272e4b2e25cef6c724463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:54.902083  305924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.key ...
	I1119 01:56:54.902107  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.key: {Name:mkbef054ca969af95af19af3c3c0ed5dce9f287e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:54.902222  305924 certs.go:257] generating profile certs ...
	I1119 01:56:54.902333  305924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.key
	I1119 01:56:54.902368  305924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.crt with IP's: []
	I1119 01:56:55.069517  305924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.crt ...
	I1119 01:56:55.069554  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.crt: {Name:mka5450d9c5fa1684dd71f60a0ef09558d5d8bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:55.069771  305924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.key ...
	I1119 01:56:55.069787  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/client.key: {Name:mkaf06bdc74597b38847ec0624a1afdef18e2dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:55.069899  305924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key.7c119060
	I1119 01:56:55.069924  305924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt.7c119060 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I1119 01:56:55.234991  305924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt.7c119060 ...
	I1119 01:56:55.235028  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt.7c119060: {Name:mke153c1fc6f1cce1c83198b9644f3525d6c417c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:55.235232  305924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key.7c119060 ...
	I1119 01:56:55.235268  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key.7c119060: {Name:mk38b893d484babd54cf8b592fdbdf2d626b9237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:55.235378  305924 certs.go:382] copying /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt.7c119060 -> /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt
	I1119 01:56:55.235479  305924 certs.go:386] copying /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key.7c119060 -> /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key
	I1119 01:56:55.235556  305924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.key
	I1119 01:56:55.235585  305924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.crt with IP's: []
	I1119 01:56:55.341440  305924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.crt ...
	I1119 01:56:55.341477  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.crt: {Name:mk27a2506bceea715d50cc63348cb9731e9862bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:56:55.341682  305924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.key ...
	I1119 01:56:55.341702  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.key: {Name:mk76e0b1cd3c9da2ed62dd7a275164b6730a7f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
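certs.go has now built a small PKI from scratch: two self-signed CAs (minikubeCA and proxyClientCA) plus leaf certificates for the client, the apiserver (with the SAN list shown above), and the aggregator. A compressed sketch of the CA half using crypto/x509; field values are illustrative, and this is not minikube's crypto.go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		certOut, _ := os.Create("ca.crt")
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		certOut.Close()
		keyOut, _ := os.Create("ca.key")
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		keyOut.Close()
		log.Print("wrote ca.crt and ca.key")
	}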
	I1119 01:56:55.341947  305924 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 01:56:55.342007  305924 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/ca.pem (1078 bytes)
	I1119 01:56:55.342044  305924 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/cert.pem (1123 bytes)
	I1119 01:56:55.342078  305924 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-301472/.minikube/certs/key.pem (1675 bytes)
	I1119 01:56:55.342673  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 01:56:55.376073  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 01:56:55.414038  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 01:56:55.447074  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 01:56:55.478502  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 01:56:55.509481  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 01:56:55.541173  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 01:56:55.573586  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/profiles/addons-218289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 01:56:55.606380  305924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-301472/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 01:56:55.639260  305924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 01:56:55.660401  305924 ssh_runner.go:195] Run: openssl version
	I1119 01:56:55.667184  305924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 01:56:55.680712  305924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:55.685978  305924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:55.686046  305924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:56:55.693652  305924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 01:56:55.711004  305924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 01:56:55.717754  305924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 01:56:55.717817  305924 kubeadm.go:401] StartCluster: {Name:addons-218289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-218289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:55.717909  305924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:56:55.717983  305924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:56:55.763440  305924 cri.go:89] found id: ""
	I1119 01:56:55.763535  305924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 01:56:55.779753  305924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 01:56:55.793197  305924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 01:56:55.806445  305924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 01:56:55.806477  305924 kubeadm.go:158] found existing configuration files:
	
	I1119 01:56:55.806543  305924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 01:56:55.818836  305924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 01:56:55.818946  305924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 01:56:55.831782  305924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 01:56:55.844413  305924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 01:56:55.844498  305924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 01:56:55.857092  305924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 01:56:55.869102  305924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 01:56:55.869180  305924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 01:56:55.881878  305924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 01:56:55.893571  305924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 01:56:55.893641  305924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 01:56:55.906838  305924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1119 01:56:55.962551  305924 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 01:56:55.962640  305924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 01:56:56.069686  305924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 01:56:56.069834  305924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 01:56:56.069990  305924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 01:56:56.082731  305924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 01:56:56.204106  305924 out.go:252]   - Generating certificates and keys ...
	I1119 01:56:56.204236  305924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 01:56:56.204323  305924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 01:56:56.204398  305924 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 01:56:56.212138  305924 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 01:56:56.855476  305924 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 01:56:57.025974  305924 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 01:56:57.176857  305924 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 01:56:57.177173  305924 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-218289 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1119 01:56:57.470932  305924 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 01:56:57.471088  305924 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-218289 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1119 01:56:57.896476  305924 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 01:56:58.398326  305924 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 01:56:58.769822  305924 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 01:56:58.770071  305924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 01:56:59.022381  305924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 01:56:59.334081  305924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 01:56:59.362106  305924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 01:56:59.602661  305924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 01:56:59.667434  305924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 01:56:59.669348  305924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 01:56:59.671713  305924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 01:56:59.674384  305924 out.go:252]   - Booting up control plane ...
	I1119 01:56:59.674514  305924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 01:56:59.674634  305924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 01:56:59.674829  305924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 01:56:59.693641  305924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 01:56:59.693740  305924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 01:56:59.700964  305924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 01:56:59.701842  305924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 01:56:59.702307  305924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 01:56:59.864676  305924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 01:56:59.864836  305924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 01:57:00.864987  305924 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001618618s
	I1119 01:57:00.869302  305924 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 01:57:00.869425  305924 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1119 01:57:00.869578  305924 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 01:57:00.869708  305924 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 01:57:03.317291  305924 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.448592114s
	I1119 01:57:05.237043  305924 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.369915617s
	I1119 01:57:06.869992  305924 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003503828s
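The [control-plane-check] phase above polls three local health endpoints until each answers. The same probes can be run by hand inside the VM; a minimal sketch using the exact URLs from the log (the components serve self-signed certificates, hence -k):

    # Probe the endpoints kubeadm polls during [control-plane-check].
    curl -k https://192.168.39.195:8443/livez    # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler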
	I1119 01:57:06.891682  305924 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 01:57:06.922603  305924 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 01:57:06.954323  305924 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 01:57:06.954535  305924 kubeadm.go:319] [mark-control-plane] Marking the node addons-218289 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 01:57:06.970752  305924 kubeadm.go:319] [bootstrap-token] Using token: 0eu25d.014rjuq80nga9d8x
	I1119 01:57:06.972199  305924 out.go:252]   - Configuring RBAC rules ...
	I1119 01:57:06.972372  305924 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 01:57:06.984808  305924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 01:57:06.997045  305924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 01:57:07.003110  305924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1119 01:57:07.007842  305924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 01:57:07.017926  305924 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 01:57:07.283903  305924 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 01:57:07.728096  305924 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 01:57:08.281492  305924 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 01:57:08.282524  305924 kubeadm.go:319] 
	I1119 01:57:08.282600  305924 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 01:57:08.282610  305924 kubeadm.go:319] 
	I1119 01:57:08.282756  305924 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 01:57:08.282789  305924 kubeadm.go:319] 
	I1119 01:57:08.282835  305924 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 01:57:08.282921  305924 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 01:57:08.282998  305924 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 01:57:08.283012  305924 kubeadm.go:319] 
	I1119 01:57:08.283086  305924 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 01:57:08.283097  305924 kubeadm.go:319] 
	I1119 01:57:08.283165  305924 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 01:57:08.283176  305924 kubeadm.go:319] 
	I1119 01:57:08.283281  305924 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 01:57:08.283397  305924 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 01:57:08.283497  305924 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 01:57:08.283509  305924 kubeadm.go:319] 
	I1119 01:57:08.283631  305924 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 01:57:08.283748  305924 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 01:57:08.283837  305924 kubeadm.go:319] 
	I1119 01:57:08.283982  305924 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0eu25d.014rjuq80nga9d8x \
	I1119 01:57:08.284126  305924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:964c6ab079f7adaa9b3569b10d6158135c8c3bc95ef1624f001bb3957d4c5e75 \
	I1119 01:57:08.284159  305924 kubeadm.go:319] 	--control-plane 
	I1119 01:57:08.284168  305924 kubeadm.go:319] 
	I1119 01:57:08.284285  305924 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 01:57:08.284300  305924 kubeadm.go:319] 
	I1119 01:57:08.284421  305924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0eu25d.014rjuq80nga9d8x \
	I1119 01:57:08.284551  305924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:964c6ab079f7adaa9b3569b10d6158135c8c3bc95ef1624f001bb3957d4c5e75 
	I1119 01:57:08.286430  305924 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 01:57:08.286451  305924 cni.go:84] Creating CNI manager for ""
	I1119 01:57:08.286462  305924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 01:57:08.289049  305924 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1119 01:57:08.290395  305924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1119 01:57:08.304476  305924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
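The 496-byte conflist itself is not printed in the log. For orientation only, a representative bridge-plugin conflist of the kind written to this path; the plugin layout and subnet below are assumptions, not minikube's generated file:

    # Illustrative bridge CNI config; values are assumed, not minikube's own.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF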
	I1119 01:57:08.331946  305924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 01:57:08.332021  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:08.332085  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-218289 minikube.k8s.io/updated_at=2025_11_19T01_57_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=addons-218289 minikube.k8s.io/primary=true
	I1119 01:57:08.373776  305924 ops.go:34] apiserver oom_adj: -16
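The label command above stamps minikube metadata onto the node; a quick way to confirm it took effect:

    # Show the minikube.k8s.io/* labels applied by the harness.
    kubectl get node addons-218289 --show-labels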
	I1119 01:57:08.491342  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:08.991659  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:09.492242  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:09.992101  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:10.492441  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:10.992405  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:11.491695  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:11.991530  305924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:57:12.074486  305924 kubeadm.go:1114] duration metric: took 3.742532184s to wait for elevateKubeSystemPrivileges
	I1119 01:57:12.074528  305924 kubeadm.go:403] duration metric: took 16.356717642s to StartCluster
	I1119 01:57:12.074571  305924 settings.go:142] acquiring lock: {Name:mk262b1565599e10b165cc4beba3dd6e17a18933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:12.074732  305924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-301472/kubeconfig
	I1119 01:57:12.075158  305924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-301472/kubeconfig: {Name:mk1855c1f9209cc750154364e307c62a09606e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:12.075461  305924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 01:57:12.075486  305924 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 01:57:12.075461  305924 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
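The toEnable map reflects the per-profile addon switches. The same toggles are normally driven from the minikube CLI; for example, against this profile:

    # Flip addon switches for the addons-218289 profile from the CLI.
    minikube -p addons-218289 addons enable csi-hostpath-driver
    minikube -p addons-218289 addons disable volcano
    minikube -p addons-218289 addons list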
	I1119 01:57:12.075611  305924 addons.go:70] Setting yakd=true in profile "addons-218289"
	I1119 01:57:12.075662  305924 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-218289"
	I1119 01:57:12.075668  305924 addons.go:70] Setting ingress=true in profile "addons-218289"
	I1119 01:57:12.075692  305924 addons.go:239] Setting addon ingress=true in "addons-218289"
	I1119 01:57:12.075706  305924 config.go:182] Loaded profile config "addons-218289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:12.075716  305924 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-218289"
	I1119 01:57:12.075750  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075752  305924 addons.go:70] Setting default-storageclass=true in profile "addons-218289"
	I1119 01:57:12.075759  305924 addons.go:70] Setting inspektor-gadget=true in profile "addons-218289"
	I1119 01:57:12.075767  305924 addons.go:239] Setting addon inspektor-gadget=true in "addons-218289"
	I1119 01:57:12.075768  305924 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-218289"
	I1119 01:57:12.075777  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075809  305924 addons.go:70] Setting metrics-server=true in profile "addons-218289"
	I1119 01:57:12.075830  305924 addons.go:239] Setting addon metrics-server=true in "addons-218289"
	I1119 01:57:12.075861  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075860  305924 addons.go:70] Setting ingress-dns=true in profile "addons-218289"
	I1119 01:57:12.075888  305924 addons.go:239] Setting addon ingress-dns=true in "addons-218289"
	I1119 01:57:12.075874  305924 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-218289"
	I1119 01:57:12.075921  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075920  305924 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-218289"
	I1119 01:57:12.075962  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075633  305924 addons.go:239] Setting addon yakd=true in "addons-218289"
	I1119 01:57:12.076282  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075673  305924 addons.go:70] Setting volumesnapshots=true in profile "addons-218289"
	I1119 01:57:12.076343  305924 addons.go:239] Setting addon volumesnapshots=true in "addons-218289"
	I1119 01:57:12.076370  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075666  305924 addons.go:70] Setting volcano=true in profile "addons-218289"
	I1119 01:57:12.076651  305924 addons.go:239] Setting addon volcano=true in "addons-218289"
	I1119 01:57:12.076699  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075641  305924 addons.go:70] Setting registry-creds=true in profile "addons-218289"
	I1119 01:57:12.076989  305924 addons.go:239] Setting addon registry-creds=true in "addons-218289"
	I1119 01:57:12.077019  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075643  305924 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-218289"
	I1119 01:57:12.077100  305924 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-218289"
	I1119 01:57:12.077123  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075651  305924 addons.go:70] Setting storage-provisioner=true in profile "addons-218289"
	I1119 01:57:12.077171  305924 addons.go:239] Setting addon storage-provisioner=true in "addons-218289"
	I1119 01:57:12.077202  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075625  305924 addons.go:70] Setting gcp-auth=true in profile "addons-218289"
	I1119 01:57:12.077515  305924 mustload.go:66] Loading cluster: addons-218289
	I1119 01:57:12.075750  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.077709  305924 config.go:182] Loaded profile config "addons-218289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:57:12.075659  305924 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-218289"
	I1119 01:57:12.075651  305924 addons.go:70] Setting cloud-spanner=true in profile "addons-218289"
	I1119 01:57:12.078039  305924 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-218289"
	I1119 01:57:12.078056  305924 addons.go:239] Setting addon cloud-spanner=true in "addons-218289"
	I1119 01:57:12.078087  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.075638  305924 addons.go:70] Setting registry=true in profile "addons-218289"
	I1119 01:57:12.078460  305924 addons.go:239] Setting addon registry=true in "addons-218289"
	I1119 01:57:12.078498  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.078415  305924 out.go:179] * Verifying Kubernetes components...
	I1119 01:57:12.080123  305924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:57:12.083149  305924 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 01:57:12.083166  305924 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 01:57:12.083259  305924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:12.083485  305924 addons.go:239] Setting addon default-storageclass=true in "addons-218289"
	I1119 01:57:12.084236  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.084592  305924 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:12.084610  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1119 01:57:12.085180  305924 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 01:57:12.085990  305924 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 01:57:12.086010  305924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 01:57:12.086428  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.086719  305924 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 01:57:12.086730  305924 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 01:57:12.086757  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 01:57:12.086725  305924 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 01:57:12.086743  305924 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 01:57:12.086766  305924 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 01:57:12.087903  305924 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-218289"
	I1119 01:57:12.088162  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:12.088177  305924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:12.088210  305924 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:12.088621  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 01:57:12.088210  305924 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:12.088821  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 01:57:12.088212  305924 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 01:57:12.088986  305924 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:12.089008  305924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 01:57:12.088987  305924 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 01:57:12.089066  305924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 01:57:12.089732  305924 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 01:57:12.089738  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 01:57:12.089760  305924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 01:57:12.089761  305924 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:12.089766  305924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:12.089771  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 01:57:12.089772  305924 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:12.089780  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 01:57:12.089785  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 01:57:12.089868  305924 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 01:57:12.089890  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 01:57:12.091162  305924 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:12.091180  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 01:57:12.091884  305924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 01:57:12.093128  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.093602  305924 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 01:57:12.093606  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 01:57:12.093759  305924 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:12.093776  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 01:57:12.093609  305924 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 01:57:12.094540  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.094662  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.094705  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.094827  305924 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 01:57:12.094842  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 01:57:12.095534  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.096479  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.096519  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.096664  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 01:57:12.097281  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.097407  305924 out.go:179]   - Using image docker.io/busybox:stable
	I1119 01:57:12.098694  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 01:57:12.098929  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.099011  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.099131  305924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:12.099144  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 01:57:12.099823  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.100340  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.100381  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.100924  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.100954  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.101046  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.101094  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.101303  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.101400  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.101832  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 01:57:12.101917  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.101998  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.102028  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.102591  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103093  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.103143  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103450  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.103483  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103488  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.103522  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103526  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.103556  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103834  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.103858  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.103889  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.103897  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.104372  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.104386  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.104407  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.104413  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.104481  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 01:57:12.104761  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.104790  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.104797  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.104861  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.105182  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.105227  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.105873  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.105904  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.106085  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.106134  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.106552  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.106582  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.106766  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.107081  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.107267  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 01:57:12.107526  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.107564  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.107704  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:12.109770  305924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 01:57:12.111096  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 01:57:12.111109  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 01:57:12.113211  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.113527  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:12.113548  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:12.113672  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	W1119 01:57:12.289171  305924 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55880->192.168.39.195:22: read: connection reset by peer
	I1119 01:57:12.289212  305924 retry.go:31] will retry after 301.390811ms: ssh: handshake failed: read tcp 192.168.39.1:55880->192.168.39.195:22: read: connection reset by peer
	W1119 01:57:12.333787  305924 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.195:22: read: connection reset by peer
	I1119 01:57:12.333826  305924 retry.go:31] will retry after 221.347052ms: ssh: handshake failed: read tcp 192.168.39.1:55912->192.168.39.195:22: read: connection reset by peer
	I1119 01:57:12.472579  305924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:57:12.472603  305924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
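The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block (mapping host.minikube.internal to 192.168.39.1) ahead of the forward stanza and a log directive ahead of errors. The result can be inspected directly; the expected excerpt is shown as comments, with the surrounding Corefile lines assumed:

    # Dump the patched Corefile and look for the injected block.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #    log
    #    errors
    #    hosts {
    #       192.168.39.1 host.minikube.internal
    #       fallthrough
    #    }
    #    forward . /etc/resolv.conf ...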
	I1119 01:57:12.571737  305924 node_ready.go:35] waiting up to 6m0s for node "addons-218289" to be "Ready" ...
	I1119 01:57:12.580333  305924 node_ready.go:49] node "addons-218289" is "Ready"
	I1119 01:57:12.580370  305924 node_ready.go:38] duration metric: took 8.585021ms for node "addons-218289" to be "Ready" ...
	I1119 01:57:12.580389  305924 api_server.go:52] waiting for apiserver process to appear ...
	I1119 01:57:12.580446  305924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 01:57:12.650098  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 01:57:12.650131  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 01:57:12.679881  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:57:12.789294  305924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 01:57:12.789327  305924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 01:57:12.792904  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:57:12.795606  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:57:12.807916  305924 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 01:57:12.807950  305924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 01:57:12.827971  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:57:12.831454  305924 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 01:57:12.831487  305924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 01:57:12.841225  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:57:12.885039  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:57:12.981747  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 01:57:13.063648  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 01:57:13.063679  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 01:57:13.107157  305924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 01:57:13.107180  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 01:57:13.157733  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:57:13.271192  305924 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 01:57:13.271230  305924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 01:57:13.389206  305924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 01:57:13.389236  305924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 01:57:13.467536  305924 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:13.467568  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 01:57:13.506921  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:57:13.564481  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 01:57:13.632169  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 01:57:13.632211  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 01:57:13.634908  305924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 01:57:13.634945  305924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 01:57:13.636814  305924 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 01:57:13.636842  305924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 01:57:13.662151  305924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 01:57:13.662185  305924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 01:57:13.799107  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:57:13.961457  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 01:57:13.961499  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 01:57:14.129950  305924 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:14.129978  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 01:57:14.270121  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 01:57:14.270161  305924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 01:57:14.271908  305924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:14.271946  305924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 01:57:14.377918  305924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 01:57:14.377956  305924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 01:57:14.825288  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:57:14.835851  305924 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:57:14.835883  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 01:57:14.989097  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:57:15.124850  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 01:57:15.124883  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 01:57:15.579788  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
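The volumesnapshots apply above registers three CRDs whose names match the manifest filenames; a sanity check:

    # Confirm the snapshot CRDs exist after the apply.
    kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
      volumesnapshotclasses.snapshot.storage.k8s.io \
      volumesnapshotcontents.snapshot.storage.k8s.io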
	I1119 01:57:15.991881  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 01:57:15.991911  305924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 01:57:16.459314  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 01:57:16.459343  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 01:57:16.480529  305924 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.00789552s)
	I1119 01:57:16.480569  305924 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.900097191s)
	I1119 01:57:16.480576  305924 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1119 01:57:16.480602  305924 api_server.go:72] duration metric: took 4.404991186s to wait for apiserver process to appear ...
	I1119 01:57:16.480612  305924 api_server.go:88] waiting for apiserver healthz status ...
	I1119 01:57:16.480628  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.800714428s)
	I1119 01:57:16.480636  305924 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1119 01:57:16.480670  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.687735884s)
	I1119 01:57:16.492528  305924 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1119 01:57:16.493793  305924 api_server.go:141] control plane version: v1.34.1
	I1119 01:57:16.493818  305924 api_server.go:131] duration metric: took 13.198463ms to wait for apiserver health ...
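The healthz probe the harness just ran can be reproduced verbatim; a healthy apiserver answers with the literal string ok:

    # Manual version of the harness's health probe (self-signed cert, hence -k).
    curl -k https://192.168.39.195:8443/healthz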
	I1119 01:57:16.493827  305924 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 01:57:16.507045  305924 system_pods.go:59] 10 kube-system pods found
	I1119 01:57:16.507083  305924 system_pods.go:61] "amd-gpu-device-plugin-nxbdq" [82477383-7556-4a81-a9eb-e1e97bc71ae3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:16.507090  305924 system_pods.go:61] "coredns-66bc5c9577-7g7r8" [ccbc7e95-365e-4990-ad59-6933bb689247] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.507098  305924 system_pods.go:61] "coredns-66bc5c9577-nwdcw" [a00a2f0d-7535-4aaf-8dbf-0164b16fa453] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.507103  305924 system_pods.go:61] "etcd-addons-218289" [81f3f0da-7a88-4de5-9e71-c4671785ab1b] Running
	I1119 01:57:16.507107  305924 system_pods.go:61] "kube-apiserver-addons-218289" [ca96723f-f379-4069-b286-cdbf162ac04d] Running
	I1119 01:57:16.507110  305924 system_pods.go:61] "kube-controller-manager-addons-218289" [0cbab2df-606c-4614-bf05-e6ca942a10e7] Running
	I1119 01:57:16.507116  305924 system_pods.go:61] "kube-proxy-pq5np" [6c6e814f-096f-4018-baab-6e6d62808c2b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 01:57:16.507119  305924 system_pods.go:61] "kube-scheduler-addons-218289" [cc7bb176-e5eb-4f69-b1c9-9a9fdc069169] Running
	I1119 01:57:16.507125  305924 system_pods.go:61] "nvidia-device-plugin-daemonset-skf2j" [9dc51c92-7707-49b7-b1af-1ffe9c693b40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:16.507131  305924 system_pods.go:61] "registry-creds-764b6fb674-zkqkw" [a5392d9b-df88-40e6-a44c-11b54a51f70e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:16.507160  305924 system_pods.go:74] duration metric: took 13.304814ms to wait for pod list to return data ...
	I1119 01:57:16.507176  305924 default_sa.go:34] waiting for default service account to be created ...
	I1119 01:57:16.512622  305924 default_sa.go:45] found service account: "default"
	I1119 01:57:16.512645  305924 default_sa.go:55] duration metric: took 5.463448ms for default service account to be created ...
	I1119 01:57:16.512653  305924 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 01:57:16.574948  305924 system_pods.go:86] 10 kube-system pods found
	I1119 01:57:16.574993  305924 system_pods.go:89] "amd-gpu-device-plugin-nxbdq" [82477383-7556-4a81-a9eb-e1e97bc71ae3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:16.575002  305924 system_pods.go:89] "coredns-66bc5c9577-7g7r8" [ccbc7e95-365e-4990-ad59-6933bb689247] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.575010  305924 system_pods.go:89] "coredns-66bc5c9577-nwdcw" [a00a2f0d-7535-4aaf-8dbf-0164b16fa453] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.575014  305924 system_pods.go:89] "etcd-addons-218289" [81f3f0da-7a88-4de5-9e71-c4671785ab1b] Running
	I1119 01:57:16.575018  305924 system_pods.go:89] "kube-apiserver-addons-218289" [ca96723f-f379-4069-b286-cdbf162ac04d] Running
	I1119 01:57:16.575022  305924 system_pods.go:89] "kube-controller-manager-addons-218289" [0cbab2df-606c-4614-bf05-e6ca942a10e7] Running
	I1119 01:57:16.575027  305924 system_pods.go:89] "kube-proxy-pq5np" [6c6e814f-096f-4018-baab-6e6d62808c2b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 01:57:16.575031  305924 system_pods.go:89] "kube-scheduler-addons-218289" [cc7bb176-e5eb-4f69-b1c9-9a9fdc069169] Running
	I1119 01:57:16.575036  305924 system_pods.go:89] "nvidia-device-plugin-daemonset-skf2j" [9dc51c92-7707-49b7-b1af-1ffe9c693b40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:16.575042  305924 system_pods.go:89] "registry-creds-764b6fb674-zkqkw" [a5392d9b-df88-40e6-a44c-11b54a51f70e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:16.575060  305924 retry.go:31] will retry after 205.83334ms: missing components: kube-proxy
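Both retries here are waiting on the same thing: the kube-proxy DaemonSet pod leaving Pending. Assuming the standard kubeadm label on that DaemonSet, it can be watched directly:

    # Watch kube-proxy come up (k8s-app=kube-proxy is the usual kubeadm label).
    kubectl -n kube-system get pods -l k8s-app=kube-proxy -w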
	I1119 01:57:16.646458  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 01:57:16.646483  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 01:57:16.791648  305924 system_pods.go:86] 10 kube-system pods found
	I1119 01:57:16.791694  305924 system_pods.go:89] "amd-gpu-device-plugin-nxbdq" [82477383-7556-4a81-a9eb-e1e97bc71ae3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:16.791709  305924 system_pods.go:89] "coredns-66bc5c9577-7g7r8" [ccbc7e95-365e-4990-ad59-6933bb689247] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.791719  305924 system_pods.go:89] "coredns-66bc5c9577-nwdcw" [a00a2f0d-7535-4aaf-8dbf-0164b16fa453] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:16.791724  305924 system_pods.go:89] "etcd-addons-218289" [81f3f0da-7a88-4de5-9e71-c4671785ab1b] Running
	I1119 01:57:16.791728  305924 system_pods.go:89] "kube-apiserver-addons-218289" [ca96723f-f379-4069-b286-cdbf162ac04d] Running
	I1119 01:57:16.791732  305924 system_pods.go:89] "kube-controller-manager-addons-218289" [0cbab2df-606c-4614-bf05-e6ca942a10e7] Running
	I1119 01:57:16.791737  305924 system_pods.go:89] "kube-proxy-pq5np" [6c6e814f-096f-4018-baab-6e6d62808c2b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 01:57:16.791741  305924 system_pods.go:89] "kube-scheduler-addons-218289" [cc7bb176-e5eb-4f69-b1c9-9a9fdc069169] Running
	I1119 01:57:16.791746  305924 system_pods.go:89] "nvidia-device-plugin-daemonset-skf2j" [9dc51c92-7707-49b7-b1af-1ffe9c693b40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:16.791751  305924 system_pods.go:89] "registry-creds-764b6fb674-zkqkw" [a5392d9b-df88-40e6-a44c-11b54a51f70e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:16.791768  305924 retry.go:31] will retry after 379.930754ms: missing components: kube-proxy
	I1119 01:57:16.984578  305924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:16.984608  305924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 01:57:16.998996  305924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-218289" context rescaled to 1 replicas
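The kapi.go:214 line records minikube trimming coredns from its default two replicas down to one. With client-go, a rescale like this typically goes through the deployment's scale subresource; a minimal sketch, assuming a clientset cs built elsewhere:

    package addons

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS sets the coredns deployment to the given replica count via
    // the scale subresource, as the "rescaled to 1 replicas" line reports.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }

Using the scale subresource rather than patching the deployment spec keeps the change narrow: only the replica count is written, so it cannot clobber concurrent edits to the rest of the object.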
	I1119 01:57:17.180062  305924 system_pods.go:86] 10 kube-system pods found
	I1119 01:57:17.180108  305924 system_pods.go:89] "amd-gpu-device-plugin-nxbdq" [82477383-7556-4a81-a9eb-e1e97bc71ae3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 01:57:17.180119  305924 system_pods.go:89] "coredns-66bc5c9577-7g7r8" [ccbc7e95-365e-4990-ad59-6933bb689247] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:17.180130  305924 system_pods.go:89] "coredns-66bc5c9577-nwdcw" [a00a2f0d-7535-4aaf-8dbf-0164b16fa453] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:57:17.180137  305924 system_pods.go:89] "etcd-addons-218289" [81f3f0da-7a88-4de5-9e71-c4671785ab1b] Running
	I1119 01:57:17.180143  305924 system_pods.go:89] "kube-apiserver-addons-218289" [ca96723f-f379-4069-b286-cdbf162ac04d] Running
	I1119 01:57:17.180149  305924 system_pods.go:89] "kube-controller-manager-addons-218289" [0cbab2df-606c-4614-bf05-e6ca942a10e7] Running
	I1119 01:57:17.180154  305924 system_pods.go:89] "kube-proxy-pq5np" [6c6e814f-096f-4018-baab-6e6d62808c2b] Running
	I1119 01:57:17.180161  305924 system_pods.go:89] "kube-scheduler-addons-218289" [cc7bb176-e5eb-4f69-b1c9-9a9fdc069169] Running
	I1119 01:57:17.180168  305924 system_pods.go:89] "nvidia-device-plugin-daemonset-skf2j" [9dc51c92-7707-49b7-b1af-1ffe9c693b40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:57:17.180174  305924 system_pods.go:89] "registry-creds-764b6fb674-zkqkw" [a5392d9b-df88-40e6-a44c-11b54a51f70e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:57:17.180190  305924 system_pods.go:126] duration metric: took 667.529555ms to wait for k8s-apps to be running ...
	I1119 01:57:17.180206  305924 system_svc.go:44] waiting for kubelet service to be running ...
	I1119 01:57:17.180282  305924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
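The kubelet probe relies purely on systemctl's exit status: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. A local-exec sketch of the same probe (minikube runs it through its SSH runner instead):

    package svc

    import "os/exec"

    // kubeletRunning reports whether the kubelet systemd unit is active.
    // With --quiet, systemctl stays silent and signals state via exit code,
    // so only the error result matters.
    func kubeletRunning() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }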
	I1119 01:57:17.485443  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:57:19.584481  305924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 01:57:19.587815  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:19.588301  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:19.588337  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:19.588565  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:19.926219  305924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 01:57:20.073874  305924 addons.go:239] Setting addon gcp-auth=true in "addons-218289"
	I1119 01:57:20.073938  305924 host.go:66] Checking if "addons-218289" exists ...
	I1119 01:57:20.075750  305924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 01:57:20.078524  305924 main.go:143] libmachine: domain addons-218289 has defined MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:20.078952  305924 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:2c:30", ip: ""} in network mk-addons-218289: {Iface:virbr1 ExpiryTime:2025-11-19 02:56:43 +0000 UTC Type:0 Mac:52:54:00:b3:2c:30 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-218289 Clientid:01:52:54:00:b3:2c:30}
	I1119 01:57:20.078980  305924 main.go:143] libmachine: domain addons-218289 has defined IP address 192.168.39.195 and MAC address 52:54:00:b3:2c:30 in network mk-addons-218289
	I1119 01:57:20.079141  305924 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21924-301472/.minikube/machines/addons-218289/id_rsa Username:docker}
	I1119 01:57:21.845932  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.050284657s)
	I1119 01:57:21.845991  305924 addons.go:480] Verifying addon ingress=true in "addons-218289"
	I1119 01:57:21.846001  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.017985272s)
	I1119 01:57:21.846084  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.004804688s)
	I1119 01:57:21.846164  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.961099184s)
	I1119 01:57:21.846208  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.864433269s)
	I1119 01:57:21.846355  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.688595038s)
	I1119 01:57:21.846430  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.339476648s)
	I1119 01:57:21.846476  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.281957376s)
	I1119 01:57:21.846510  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.047369142s)
	I1119 01:57:21.846525  305924 addons.go:480] Verifying addon registry=true in "addons-218289"
	I1119 01:57:21.846547  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.021230324s)
	I1119 01:57:21.846663  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.857518448s)
	I1119 01:57:21.846681  305924 addons.go:480] Verifying addon metrics-server=true in "addons-218289"
	I1119 01:57:21.847806  305924 out.go:179] * Verifying registry addon...
	I1119 01:57:21.847839  305924 out.go:179] * Verifying ingress addon...
	I1119 01:57:21.847811  305924 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-218289 service yakd-dashboard -n yakd-dashboard
	
	I1119 01:57:21.850077  305924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 01:57:21.850836  305924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 01:57:21.920110  305924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:57:21.920144  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:21.920447  305924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 01:57:21.920472  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:57:21.946139  305924 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
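That warning is an optimistic-concurrency conflict: between reading the local-path StorageClass and writing back the default-class annotation, another writer updated the object, so the update carried a stale resourceVersion and the apiserver rejected it. The usual remedy is to re-read and retry on conflict; a minimal client-go sketch:

    package sc

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // markDefault sets the default-class annotation on a StorageClass,
    // re-fetching and retrying whenever the write loses an update race.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err // a Conflict here triggers another Get+Update round
    	})
    }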
	I1119 01:57:22.338688  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.758845928s)
	W1119 01:57:22.338743  305924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:57:22.338761  305924 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.158443403s)
	I1119 01:57:22.338804  305924 system_svc.go:56] duration metric: took 5.158593049s WaitForService to wait for kubelet
	I1119 01:57:22.338822  305924 kubeadm.go:587] duration metric: took 10.263210393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:57:22.338848  305924 node_conditions.go:102] verifying NodePressure condition ...
	I1119 01:57:22.338772  305924 retry.go:31] will retry after 215.156546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:57:22.373323  305924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 01:57:22.373366  305924 node_conditions.go:123] node cpu capacity is 2
	I1119 01:57:22.373382  305924 node_conditions.go:105] duration metric: took 34.523669ms to run NodePressure ...
	I1119 01:57:22.373397  305924 start.go:242] waiting for startup goroutines ...
	I1119 01:57:22.379061  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.395863  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:22.554669  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
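The VolumeSnapshotClass failure above is an ordering race, not a broken manifest: a single kubectl apply both creates the snapshot CRDs and a VolumeSnapshotClass instance, and the instance is rejected because the just-created CRDs are not yet established ("ensure CRDs are installed first"). minikube's answer is the retry seen here, re-applying with --force; an alternative is a two-phase apply that waits for the CRDs before creating instances. A sketch shelling out to kubectl (file paths are taken from the log; the helper itself is illustrative):

    package addons

    import (
    	"fmt"
    	"os/exec"
    )

    // applySnapshotAddon installs the snapshot CRDs, waits until they are
    // established, and only then applies the VolumeSnapshotClass that needs them.
    func applySnapshotAddon() error {
    	steps := [][]string{
    		{"kubectl", "apply",
    			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
    			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml"},
    		// kubectl wait blocks until the CRD reports Established, closing
    		// the race behind "ensure CRDs are installed first".
    		{"kubectl", "wait", "--for=condition=established", "--timeout=60s",
    			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
    		{"kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v\n%s", s, err, out)
    		}
    	}
    	return nil
    }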
	I1119 01:57:22.864624  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:22.865415  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.126059  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.640542894s)
	I1119 01:57:23.126112  305924 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.050327289s)
	I1119 01:57:23.126120  305924 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-218289"
	I1119 01:57:23.127845  305924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:57:23.127850  305924 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 01:57:23.129809  305924 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 01:57:23.130555  305924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 01:57:23.130927  305924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 01:57:23.130951  305924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 01:57:23.158281  305924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:57:23.158308  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
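Each of the repeated "waiting for pod ..., current state: Pending" lines that follow is one tick of kapi.go's poll loop: list the pods matching the label selector, and keep checking until every match reports Running. A minimal client-go sketch of that wait, assuming a clientset cs:

    package kapi

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls until every pod matching selector in ns is Running,
    // mirroring the kapi.go:75/96 lines in this log.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			if len(pods.Items) == 0 {
    				return false, nil
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }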
	I1119 01:57:23.303733  305924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 01:57:23.303764  305924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 01:57:23.358415  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.364512  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:23.436828  305924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:23.436857  305924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 01:57:23.529334  305924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:57:23.637322  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:23.859462  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:23.859547  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.138579  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.356158  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.356413  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:24.589909  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.03517969s)
	I1119 01:57:24.636426  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:24.864313  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:24.865783  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:25.146997  305924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.617621378s)
	I1119 01:57:25.148227  305924 addons.go:480] Verifying addon gcp-auth=true in "addons-218289"
	I1119 01:57:25.149814  305924 out.go:179] * Verifying gcp-auth addon...
	I1119 01:57:25.151615  305924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 01:57:25.163583  305924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 01:57:25.163604  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.164081  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.363769  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:25.363779  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.638040  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:25.654772  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:25.863534  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:25.863584  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.139200  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.161089  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.364660  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:26.364698  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.636594  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:26.739496  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:26.862991  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:26.865882  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.142230  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.159218  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.357692  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:27.357795  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.640651  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:27.658043  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:27.856494  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:27.858296  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.137128  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.157052  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.356915  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:28.357204  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.635228  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:28.655130  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:28.862370  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:28.863068  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.134903  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.155640  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.357160  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:29.357323  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.636832  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:29.657788  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:29.854800  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:29.856062  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.136795  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.155769  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.354291  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:30.356202  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.635878  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:30.655307  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:30.853735  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:30.856142  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.138190  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.155033  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.355479  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:31.355502  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.634560  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:31.655601  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:31.855860  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:31.858827  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.136996  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.158243  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.355933  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.362104  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:32.646290  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:32.658856  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:32.857644  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:32.857706  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:33.134405  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.157898  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.359806  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:33.360612  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:33.635442  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:33.656303  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:33.863588  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:33.863762  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.135428  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.155694  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.358490  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:34.363143  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.637010  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:34.656958  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:34.859270  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:34.862584  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:35.138907  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.156087  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.356850  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:35.360288  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:35.664465  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:35.666263  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:35.857327  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:35.858430  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.139042  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.156607  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.357463  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:36.359107  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.637864  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:36.657845  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:36.856677  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:36.858699  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:37.135448  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.156088  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.354413  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:37.355103  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:37.659623  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:37.659647  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:37.854468  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:37.854553  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.599489  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.612642  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:38.614812  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.614877  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.710939  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:38.711263  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:38.855879  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:38.857868  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.137750  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.158192  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.358324  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.361448  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:39.638386  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:39.656203  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:39.853002  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:39.855178  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.137852  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.158462  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.358757  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.359185  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:40.658315  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:40.663316  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:40.855594  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:40.859648  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:41.138848  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.157659  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.356457  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.358622  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:41.637262  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:41.656573  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:41.881535  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:41.901663  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.136411  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.155204  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.353466  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.361269  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:42.636268  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:42.658201  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:42.857395  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:42.857435  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.134808  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.157288  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.357819  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.364612  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:43.635966  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:43.656431  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:43.862345  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:43.864705  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:44.137295  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.155851  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.362440  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.364665  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:44.637803  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:44.656555  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:44.854944  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:44.855938  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.137354  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.155473  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.359141  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.361885  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:45.672417  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:45.672988  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:45.854038  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:45.854635  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:46.135095  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.156789  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.355853  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:46.355934  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.639260  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:46.658854  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:46.855214  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:46.855711  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.134198  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.156576  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.358116  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.359226  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:47.644472  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:47.655409  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:47.855468  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:47.855794  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.136397  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.161032  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.355024  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:48.355281  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.635530  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:48.656955  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:48.858106  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:48.859508  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.134382  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.155356  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.353802  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:57:49.356214  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:49.637027  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:49.654750  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:49.856230  305924 kapi.go:107] duration metric: took 28.006145897s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 01:57:49.857292  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.136225  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.155294  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.354860  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:50.640232  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:50.654930  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:50.855790  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.139414  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.156080  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.358585  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:51.637126  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:51.656316  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:51.859684  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.136696  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.156282  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.355499  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:52.640899  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:52.655313  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:52.857754  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.137768  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.155566  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.359320  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:53.733406  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:53.735129  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:53.855421  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.136463  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.156331  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.355236  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:54.642795  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:54.739692  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:54.855106  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.136121  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.154764  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.355762  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:55.636409  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:55.655894  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:55.855646  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.134385  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.156587  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.371561  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:56.641752  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:56.660035  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:56.865966  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.138955  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.155182  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.356798  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:57.637092  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:57.658501  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:57.867389  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.138961  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.155405  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:58.355022  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:58.637385  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:58.659442  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.085320  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.188438  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.188775  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.362362  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:57:59.638219  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:57:59.656421  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:57:59.857176  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.135601  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.156179  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.355296  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:00.635688  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:00.658572  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:00.857361  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.136659  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.159367  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.356319  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:01.638694  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:01.656175  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:01.858878  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.137044  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.158371  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.361272  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:02.634941  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:02.655815  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:02.854142  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.141512  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.155450  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.355549  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:03.639356  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:03.658868  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:03.868259  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.136214  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.156609  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.356324  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:04.634839  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:04.654920  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:04.856739  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.137362  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.155218  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.358833  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:05.636670  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:05.656437  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:05.870399  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.137230  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.156617  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.356099  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:06.639568  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:06.658733  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:06.867476  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.140653  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.157064  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.357676  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:07.634214  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:07.656215  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:07.855186  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.139240  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.239748  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.358290  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:08.635895  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:08.656321  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:08.856230  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.143113  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.159631  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.782438  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:09.792933  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:09.793001  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:09.863089  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.135741  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.236327  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.357585  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:10.634091  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:10.656888  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:10.855576  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.138084  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.154729  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.357828  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:11.642931  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:11.657339  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:11.865172  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.143048  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.155033  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.358039  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:12.634894  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:12.657417  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:12.858366  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.137812  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.157854  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.356771  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:13.634889  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:13.655888  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:13.857574  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.134674  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.157737  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.354354  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:14.635339  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:14.657086  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:14.859737  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:15.134798  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.158394  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.355790  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:15.637607  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:15.657388  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:15.859890  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.135196  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.155804  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.356594  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:16.637689  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:16.656598  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:16.856906  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.143326  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.155534  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.355643  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:17.636922  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:17.656343  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:17.856756  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.135691  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.157508  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.355942  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:18.639163  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:18.655931  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:18.854532  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:19.135089  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:19.156083  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.355824  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:19.634506  305924 kapi.go:107] duration metric: took 56.503948355s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 01:58:19.655157  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:19.855469  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:20.155855  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.505833  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:20.655510  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:20.855036  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:21.157121  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.356556  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:21.655660  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:21.855576  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:22.155990  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.355763  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:22.655356  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:22.854802  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:23.156152  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.354896  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:23.655601  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:23.855984  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:24.155866  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:24.355299  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:24.656139  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:24.855347  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:25.156067  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:25.355599  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:25.655646  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:25.857567  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:26.155702  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:26.357624  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:26.658694  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:26.858157  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:27.158257  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:27.356350  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:27.659273  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:27.859020  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:28.155449  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:28.356292  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:28.658132  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:28.854923  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:29.158901  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:29.354889  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:29.655122  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:29.856062  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:30.159823  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:30.394307  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:30.662059  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:30.856134  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:31.158394  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:31.354633  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:31.655490  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:31.857906  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:32.157960  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:32.355873  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:32.656288  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:32.855803  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:33.156684  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:33.355487  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:33.661590  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:33.859649  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:34.155233  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:34.355200  305924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:34.656677  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:34.855995  305924 kapi.go:107] duration metric: took 1m13.005151486s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 01:58:35.159473  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:35.657807  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:36.156917  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:36.655632  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:37.156033  305924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:37.656698  305924 kapi.go:107] duration metric: took 1m12.505080499s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 01:58:37.658286  305924 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-218289 cluster.
	I1119 01:58:37.659409  305924 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 01:58:37.660604  305924 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 01:58:37.662017  305924 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 01:58:37.663133  305924 addons.go:515] duration metric: took 1m25.587642578s for enable addons: enabled=[nvidia-device-plugin registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner inspektor-gadget cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
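
	For reference, the `gcp-auth-skip-secret` opt-out described in the messages above is an ordinary pod label that must be present when the pod is created. A minimal sketch of such a pod spec; the pod name is hypothetical, the "true" value is an assumption (the message only requires the label key to exist), and the busybox image digest is the one already running in this cluster:

	    $ kubectl --context addons-218289 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo          # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"    # key from the message above; value assumed
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	        command: ["sleep", "3600"]
	    EOF

	For pods created before the addon finished, the rerun suggested by the message above would be:

	    $ minikube -p addons-218289 addons enable gcp-auth --refresh
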
	I1119 01:58:37.663183  305924 start.go:247] waiting for cluster config update ...
	I1119 01:58:37.663208  305924 start.go:256] writing updated cluster config ...
	I1119 01:58:37.663527  305924 ssh_runner.go:195] Run: rm -f paused
	I1119 01:58:37.670523  305924 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 01:58:37.675369  305924 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nwdcw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.685708  305924 pod_ready.go:94] pod "coredns-66bc5c9577-nwdcw" is "Ready"
	I1119 01:58:37.685731  305924 pod_ready.go:86] duration metric: took 10.339485ms for pod "coredns-66bc5c9577-nwdcw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.691615  305924 pod_ready.go:83] waiting for pod "etcd-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.697599  305924 pod_ready.go:94] pod "etcd-addons-218289" is "Ready"
	I1119 01:58:37.697620  305924 pod_ready.go:86] duration metric: took 5.982951ms for pod "etcd-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.700492  305924 pod_ready.go:83] waiting for pod "kube-apiserver-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.705668  305924 pod_ready.go:94] pod "kube-apiserver-addons-218289" is "Ready"
	I1119 01:58:37.705687  305924 pod_ready.go:86] duration metric: took 5.176618ms for pod "kube-apiserver-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:37.708153  305924 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:38.075410  305924 pod_ready.go:94] pod "kube-controller-manager-addons-218289" is "Ready"
	I1119 01:58:38.075447  305924 pod_ready.go:86] duration metric: took 367.275582ms for pod "kube-controller-manager-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:38.275976  305924 pod_ready.go:83] waiting for pod "kube-proxy-pq5np" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:38.675467  305924 pod_ready.go:94] pod "kube-proxy-pq5np" is "Ready"
	I1119 01:58:38.675510  305924 pod_ready.go:86] duration metric: took 399.502383ms for pod "kube-proxy-pq5np" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:38.875656  305924 pod_ready.go:83] waiting for pod "kube-scheduler-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:39.274832  305924 pod_ready.go:94] pod "kube-scheduler-addons-218289" is "Ready"
	I1119 01:58:39.274866  305924 pod_ready.go:86] duration metric: took 399.178627ms for pod "kube-scheduler-addons-218289" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 01:58:39.274883  305924 pod_ready.go:40] duration metric: took 1.604329987s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
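
	The pod_ready polling above can be reproduced by hand with kubectl; a rough equivalent for one of the label selectors listed in the log line, reusing the same 4m budget (selector and timeout come from the log; the command form is just one way to express the check):

	    $ kubectl --context addons-218289 -n kube-system wait pod \
	        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
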
	I1119 01:58:39.321722  305924 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 01:58:39.323627  305924 out.go:179] * Done! kubectl is now configured to use "addons-218289" cluster and "default" namespace by default
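
	The final message can be checked against the kubeconfig with two standard kubectl commands (expected outputs are inferred from the message above):

	    $ kubectl config current-context                             # expect: addons-218289
	    $ kubectl config view --minify -o jsonpath='{..namespace}'   # expect: default, if set explicitly
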
	
	
	==> CRI-O <==
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.359480748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763517944359450219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=263786df-bff1-4dbc-b988-00a07c0d5807 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.360511635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a29794a-4bc5-4d2b-aa2a-cba1cd837abe name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.360885745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a29794a-4bc5-4d2b-aa2a-cba1cd837abe name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.362272385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727afc2dec9495b8e2f91a60222523d7252fd1f44b309b19943be13123ed65e7,PodSandboxId:99b84a617cd04f7734ad7c85fb4cbeedb564bc159ddee6b15a6d35f69ff68df5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763517559203911768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 135bfd36-2352-4de6-a595-ee44e83d5f6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67baaf793d5ef013a45d181f9504e2c6fd17c9fb5f1373a4f4ef6ac1348cbdad,PodSandboxId:3a9e9486c21fe6e045bb0e64e2f0108c95c11661d1db979a7f5cee437483c630,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763517522841787274,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1e6e7f2-2038-4edf-9d9f-e0df8b042b38,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dffcb6433b668923ccaa6b68ea5f5703d5611da2a93474d9ab5f5b10a1e485f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763517498503962204,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b76ce050ac0b3293632738f076a766f3d6fd3b58d01c3cc1d6a0382aef09a9,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763517496942705917,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e22fcb254c304c9a07d90f714674bcd58f8d9c63eb5476823fa98072f9811c23,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763517495161054677,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5475cb51623e4e67a9eae99b521cc1140854b924b0714f75c29d54c94f9f2f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763517494172986700,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4f15cc1a6d79122d6460fec61497605c71845d766322be74f6614251c8e13d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763517492721999486,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97db18b98e110be070ae35ee73138fd11c5ba3b17c9aaa484991c43d763e9a55,PodSandboxId:5b4521ab4fd929c1351922cc81716ffe1d45268180044fa596eb990cf29651e3,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763517491407585947,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e677e88-f1a1-4349-ab4a-ca5fc1625de0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4fb977d3bdbac0a79d2003f13473ec9efa02e41788b6915f23ed2b6dc8e649d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763517489905899642,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bec5948a3b0b6d85663bcc54a7678b1ad00c0a0cd97e6ccb2e389ca190e5b7d,PodSandboxId:459a8af32380f9c4e3915bb69cfc3749e47cb6b9cdcf7011dce68c410d10aad4,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517487952068207,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-vfht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed086c0-f1c8-43c4-8169-ce475fdfdd33,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291bbb19b04d1ef6f333fdd39da87aae78308f4944aacdcd99cd300c7ed8e316,PodSandboxId:7c0efe47e19fcb4a97b5cbe98e9d037d6810d61399046721cc66f827cedb0024,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763517487803557432,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b814d83-143e-4277-9129-6a9219cccb21,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939ec33be6abea25e835db0a4c1feea78964b94ab98833ad5d89d1c2969f5f,PodSandboxId:4dd16219bd3ea2aca807ff7423efff479ca92ef70d7338bd1610f996a0c5101e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517484972917705,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-v87lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e442212e-6930-4bc6-8062-f14b3d34e047,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c37385089e2fd5010e4c5eed6eb2712cdc98d10f0ddfdff3c63b06d2d65a8a5,PodSandboxId:fbdc9dfa091b63143a4d3ff2c7bae60d86c5ed70095351832d43471dea29907c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763517477451146665,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-hlxxx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3d1a057b-0eec-4bb7-beaa-1697b59b68a5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e09ec1e87aeccfdc35d357732ec75950e2f59b3b345e2c10a81275cb3fd018,PodSandboxId:6cab7defc307586598251f43f2edc3e257df822f26b7b122e33369092a2a44d8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763517450386400140,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nxbdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82477383-7556-4a81-a9eb-e1e97bc71ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806aed04c77717c07bd0dbbcbf0c24802085be59805c4afc5aa5d61c065570a6,PodSandboxId:1468403c01b0cf510196b0f96d2d25272abacbf8f95f3845f1675d800d5c8c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763517445914785294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af62a255-b0ad-411e-b2ac-cac4da21796a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad27302e7041367cc6999d8521c1f0fc9a761832f47f5bebb2b72835c3a338f,PodSandboxId:4eeb349dff4d7e5fb6ba73695a192f6af5752a740d2d9a3bd8fcefbfb3d9c783,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763517435514069247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pq5np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6e814f-096f-4018-baab-6e6d62808c2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ea8a3b749f7ca60e18092d90f58348f2fddd8e74f8ef03e750ee4eb5947b6f,PodSandboxId:d0c41d7971ab4302c77ee6d81c1f842b405c6eb42cd303a7db5d1723603cc48c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763517435224289288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nwdcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a00a2f0d-7535-4aaf-8dbf-0164b16fa453,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c20b8a8f389ee678e1ebf3800604f594655c499015319f393b3666a64dfdd0,PodSandboxId:30d3fb9e3ad1a2fda31d8f0811c7643c244dd8f098c4cb71cb91f9ecc5db457a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763517421833063054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bc89227a9990d7a51273102785bed2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc2a41a0f970aa4059e28629a2cd153101214d9bf53eedec076be56afff321f,PodSandboxId:4ce9c90eeda6fcf4668b70de0669b691a9d3c807ec138b85414fc457700455c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763517421782552463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d727631b7e92247f04647b78bf14acb0,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65806a2e7e76d3fc500c7e2f35b2715ad570f9d2f4893ad372d68111a601d4a,PodSandboxId:0c940fab7a5ca713f08fb57a3e5f0f45784c539af6a9e0fafb100229445f9c55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763517421763146896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af361a144fce44f41040a127542ce6bd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc14296f62cde52576ee82d465666b680625f61b99e1b663d31842e81befe0f,PodSandboxId:7969760c69308e9c23f52b0994b20aed06f1308f51cd1260a99649de5ed421b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763517421762217261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebcac38048370cbd5c5cbfa0b8ec4638,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a29794a-4bc5-4d2b-aa2a-cba1cd837abe name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.401511481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3665757-5712-4781-a652-2c8295eff40e name=/runtime.v1.RuntimeService/Version
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.401675483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3665757-5712-4781-a652-2c8295eff40e name=/runtime.v1.RuntimeService/Version
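
	The request/response pairs in this section are standard CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers). A sketch of issuing the same calls by hand from the node, assuming crictl is available in the minikube VM:

	    $ minikube -p addons-218289 ssh
	    $ sudo crictl version       # RuntimeService/Version        -> cri-o 1.29.1, as in the response above
	    $ sudo crictl imagefsinfo   # ImageService/ImageFsInfo      -> image filesystem usage
	    $ sudo crictl ps            # RuntimeService/ListContainers (running containers only)
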
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.403367336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ca33192-2e89-43d1-88a4-1ac66eb5dd22 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.404617441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763517944404592752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ca33192-2e89-43d1-88a4-1ac66eb5dd22 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.405521057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70bf3be3-5958-4c25-81d2-1782fbbdee56 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.405581406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70bf3be3-5958-4c25-81d2-1782fbbdee56 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.406238214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727afc2dec9495b8e2f91a60222523d7252fd1f44b309b19943be13123ed65e7,PodSandboxId:99b84a617cd04f7734ad7c85fb4cbeedb564bc159ddee6b15a6d35f69ff68df5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763517559203911768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 135bfd36-2352-4de6-a595-ee44e83d5f6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67baaf793d5ef013a45d181f9504e2c6fd17c9fb5f1373a4f4ef6ac1348cbdad,PodSandboxId:3a9e9486c21fe6e045bb0e64e2f0108c95c11661d1db979a7f5cee437483c630,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763517522841787274,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1e6e7f2-2038-4edf-9d9f-e0df8b042b38,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dffcb6433b668923ccaa6b68ea5f5703d5611da2a93474d9ab5f5b10a1e485f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763517498503962204,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b76ce050ac0b3293632738f076a766f3d6fd3b58d01c3cc1d6a0382aef09a9,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763517496942705917,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e22fcb254c304c9a07d90f714674bcd58f8d9c63eb5476823fa98072f9811c23,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763517495161054677,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-992
6-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5475cb51623e4e67a9eae99b521cc1140854b924b0714f75c29d54c94f9f2f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763517494172986700,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
f4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4f15cc1a6d79122d6460fec61497605c71845d766322be74f6614251c8e13d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763517492721999486,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97db18b98e110be070ae35ee73138fd11c5ba3b17c9aaa484991c43d763e9a55,PodSandboxId:5b4521ab4fd929c1351922cc81716ffe1d45268180044fa596eb990cf29651e3,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763517491407585947
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e677e88-f1a1-4349-ab4a-ca5fc1625de0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4fb977d3bdbac0a79d2003f13473ec9efa02e41788b6915f23ed2b6dc8e649d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763517489905899642,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bec5948a3b0b6d85663bcc54a7678b1ad00c0a0cd97e6ccb2e389ca190e5b7d,PodSandboxId:459a8af32380f9c4e3915bb69cfc3749e47cb6b9cdcf7011dce68c410d10aad4,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517487952068207,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-vfht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed086c0-f1c8-43c4-8169-ce475fdfdd33,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291bbb19b04d1ef6f333fdd39da87aae78308f4944aacdcd99cd300c7ed8e316,PodSandboxId:7c0efe47e19fcb4a97b5cbe98e9d037d6810d61399046721cc66f827cedb0024,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519
d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763517487803557432,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b814d83-143e-4277-9129-6a9219cccb21,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939ec33be6abea25e835db0a4c1feea78964b94ab98833ad5d89d1c2969f5f,PodSandboxId:4dd16219bd3ea2aca807ff7423efff479ca92ef70d7338bd1610f996a0c5101e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517484972917705,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-v87lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e442212e-6930-4bc6-8062-f14b3d34e047,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c37385089e2fd5010e4c5eed6eb2712cdc98d10f0ddfdff3c63b06d2d65a8a5,PodSandboxId:fbdc9dfa091b63143a4d3ff2c7bae60d86c5ed70095351832d43471dea29907c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763517477451146665,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-hlxxx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3d1a057b-0eec-4bb7-beaa-1697b59b68a5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e09ec1e87aeccfdc35d357732ec75950e2f59b3b345e2c10a81275cb3fd018,PodSandboxId:6cab7defc307586598251f43f2edc3e257df822f26b7b122e33369092a2a44d8,Metadata:&Container
Metadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763517450386400140,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nxbdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82477383-7556-4a81-a9eb-e1e97bc71ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806aed04c77717c07bd0dbbcbf0c24802085be59805c4afc5aa5d61c065570a6,PodSandboxId:1468403c01b0cf510196b0f96d2d25272abacbf8f95f
3845f1675d800d5c8c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763517445914785294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af62a255-b0ad-411e-b2ac-cac4da21796a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad27302e7041367cc6999d8521c1f0fc9a761832f47f5bebb2b72835c3a338f,PodSandboxId:4eeb349dff4d7e5fb6ba73695a192f6af5752a740d2d9a3bd8fcefbf
b3d9c783,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763517435514069247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pq5np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6e814f-096f-4018-baab-6e6d62808c2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ea8a3b749f7ca60e18092d90f58348f2fddd8e74f8ef03e750ee4eb5947b6f,PodSandboxId:d0c41d7971ab4302c77ee6d81c1f842b405c6eb42cd303a7db5d1723603cc48c,Metadata:&ContainerMetad
ata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763517435224289288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nwdcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a00a2f0d-7535-4aaf-8dbf-0164b16fa453,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c20b8a8f389ee678e1ebf3800604f594655c499015319f393b3666a64dfdd0,PodSandboxId:30d3fb9e3ad1a2fda31d8f0811c7643c244dd8f098c4cb71cb91f9ecc5db457a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763517421833063054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bc89227a9990d7a51273102785bed2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc2a41a0f970aa4059e28629a2cd153101214d9bf53eedec076be56afff321f,PodSandboxId:4ce9c90eeda6fcf4668b70de0669b691a9d3c807ec138b85414fc457700455c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763517421782552463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d727631b7e92247f04647b78bf14acb0,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65806a2e7e76d3fc500c7e2f35b2715ad570f9d2f4893ad372d68111a601d4a,PodSandboxId:0c940fab7a5ca713f08fb57a3e5f0f45784c539af6a9e0fafb100229445f9c55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763517421763146896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: af361a144fce44f41040a127542ce6bd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc14296f62cde52576ee82d465666b680625f61b99e1b663d31842e81befe0f,PodSandboxId:7969760c69308e9c23f52b0994b20aed06f1308f51cd1260a99649de5ed421b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763517421762217261,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebcac38048370cbd5c5cbfa0b8ec4638,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70bf3be3-5958-4c25-81d2-1782fbbdee56 name=/runtime.v1.RuntimeService/ListContainers
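The repeated Version → ImageFsInfo → ListContainers triplets in this window are a CRI client (most plausibly the kubelet, or minikube's log collector) polling CRI-O every few tens of milliseconds over the CRI gRPC API; each cycle reports the same UsedBytes (588596) and an unchanged container set, so successive dumps differ only in request ids and timestamps. For reference, a minimal Go sketch that issues the same three calls. The socket path and the use of the k8s.io/cri-api v1 generated client are assumptions for illustration, not taken from this log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path inside the minikube VM.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version — the VersionRequest/VersionResponse pairs above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// ImageService/ImageFsInfo — the UsedBytes/InodesUsed figures in the log.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Printf("%s: %d bytes used\n", u.FsId.Mountpoint, u.UsedBytes.Value)
	}

	// RuntimeService/ListContainers with an empty filter — CRI-O logs
	// "No filters were applied" and returns every container, as above.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}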
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.440987138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=257971cb-3c5e-4a41-9eb4-a008d8e0fca0 name=/runtime.v1.RuntimeService/Version
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.441080217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=257971cb-3c5e-4a41-9eb4-a008d8e0fca0 name=/runtime.v1.RuntimeService/Version
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.443162014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cd71b1d-a472-448c-9cfc-3522d3941ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.445229813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763517944444792391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd71b1d-a472-448c-9cfc-3522d3941ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.446269506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbac0533-626f-4fd4-96ca-833d660fffe0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.446375286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbac0533-626f-4fd4-96ca-833d660fffe0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.446898991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727afc2dec9495b8e2f91a60222523d7252fd1f44b309b19943be13123ed65e7,PodSandboxId:99b84a617cd04f7734ad7c85fb4cbeedb564bc159ddee6b15a6d35f69ff68df5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763517559203911768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 135bfd36-2352-4de6-a595-ee44e83d5f6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67baaf793d5ef013a45d181f9504e2c6fd17c9fb5f1373a4f4ef6ac1348cbdad,PodSandboxId:3a9e9486c21fe6e045bb0e64e2f0108c95c11661d1db979a7f5cee437483c630,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763517522841787274,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1e6e7f2-2038-4edf-9d9f-e0df8b042b38,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dffcb6433b668923ccaa6b68ea5f5703d5611da2a93474d9ab5f5b10a1e485f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763517498503962204,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b76ce050ac0b3293632738f076a766f3d6fd3b58d01c3cc1d6a0382aef09a9,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763517496942705917,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e22fcb254c304c9a07d90f714674bcd58f8d9c63eb5476823fa98072f9811c23,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763517495161054677,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-992
6-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5475cb51623e4e67a9eae99b521cc1140854b924b0714f75c29d54c94f9f2f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763517494172986700,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
f4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4f15cc1a6d79122d6460fec61497605c71845d766322be74f6614251c8e13d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763517492721999486,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97db18b98e110be070ae35ee73138fd11c5ba3b17c9aaa484991c43d763e9a55,PodSandboxId:5b4521ab4fd929c1351922cc81716ffe1d45268180044fa596eb990cf29651e3,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763517491407585947
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e677e88-f1a1-4349-ab4a-ca5fc1625de0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4fb977d3bdbac0a79d2003f13473ec9efa02e41788b6915f23ed2b6dc8e649d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763517489905899642,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bec5948a3b0b6d85663bcc54a7678b1ad00c0a0cd97e6ccb2e389ca190e5b7d,PodSandboxId:459a8af32380f9c4e3915bb69cfc3749e47cb6b9cdcf7011dce68c410d10aad4,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517487952068207,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-vfht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed086c0-f1c8-43c4-8169-ce475fdfdd33,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291bbb19b04d1ef6f333fdd39da87aae78308f4944aacdcd99cd300c7ed8e316,PodSandboxId:7c0efe47e19fcb4a97b5cbe98e9d037d6810d61399046721cc66f827cedb0024,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519
d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763517487803557432,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b814d83-143e-4277-9129-6a9219cccb21,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939ec33be6abea25e835db0a4c1feea78964b94ab98833ad5d89d1c2969f5f,PodSandboxId:4dd16219bd3ea2aca807ff7423efff479ca92ef70d7338bd1610f996a0c5101e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517484972917705,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-v87lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e442212e-6930-4bc6-8062-f14b3d34e047,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c37385089e2fd5010e4c5eed6eb2712cdc98d10f0ddfdff3c63b06d2d65a8a5,PodSandboxId:fbdc9dfa091b63143a4d3ff2c7bae60d86c5ed70095351832d43471dea29907c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763517477451146665,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-hlxxx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3d1a057b-0eec-4bb7-beaa-1697b59b68a5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e09ec1e87aeccfdc35d357732ec75950e2f59b3b345e2c10a81275cb3fd018,PodSandboxId:6cab7defc307586598251f43f2edc3e257df822f26b7b122e33369092a2a44d8,Metadata:&Container
Metadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763517450386400140,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nxbdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82477383-7556-4a81-a9eb-e1e97bc71ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806aed04c77717c07bd0dbbcbf0c24802085be59805c4afc5aa5d61c065570a6,PodSandboxId:1468403c01b0cf510196b0f96d2d25272abacbf8f95f
3845f1675d800d5c8c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763517445914785294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af62a255-b0ad-411e-b2ac-cac4da21796a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad27302e7041367cc6999d8521c1f0fc9a761832f47f5bebb2b72835c3a338f,PodSandboxId:4eeb349dff4d7e5fb6ba73695a192f6af5752a740d2d9a3bd8fcefbf
b3d9c783,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763517435514069247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pq5np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6e814f-096f-4018-baab-6e6d62808c2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ea8a3b749f7ca60e18092d90f58348f2fddd8e74f8ef03e750ee4eb5947b6f,PodSandboxId:d0c41d7971ab4302c77ee6d81c1f842b405c6eb42cd303a7db5d1723603cc48c,Metadata:&ContainerMetad
ata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763517435224289288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nwdcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a00a2f0d-7535-4aaf-8dbf-0164b16fa453,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c20b8a8f389ee678e1ebf3800604f594655c499015319f393b3666a64dfdd0,PodSandboxId:30d3fb9e3ad1a2fda31d8f0811c7643c244dd8f098c4cb71cb91f9ecc5db457a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763517421833063054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bc89227a9990d7a51273102785bed2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc2a41a0f970aa4059e28629a2cd153101214d9bf53eedec076be56afff321f,PodSandboxId:4ce9c90eeda6fcf4668b70de0669b691a9d3c807ec138b85414fc457700455c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763517421782552463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d727631b7e92247f04647b78bf14acb0,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65806a2e7e76d3fc500c7e2f35b2715ad570f9d2f4893ad372d68111a601d4a,PodSandboxId:0c940fab7a5ca713f08fb57a3e5f0f45784c539af6a9e0fafb100229445f9c55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763517421763146896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: af361a144fce44f41040a127542ce6bd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc14296f62cde52576ee82d465666b680625f61b99e1b663d31842e81befe0f,PodSandboxId:7969760c69308e9c23f52b0994b20aed06f1308f51cd1260a99649de5ed421b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763517421762217261,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebcac38048370cbd5c5cbfa0b8ec4638,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbac0533-626f-4fd4-96ca-833d660fffe0 name=/runtime.v1.RuntimeService/ListContainers
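Each dump above is preceded by "No filters were applied, returning full container list": the ListContainersRequest carried an empty ContainerFilter, so CRI-O serializes all twenty containers on every poll. A filter narrows the response server-side. A hypothetical helper reusing the client and imports from the previous sketch; the label keys are copied verbatim from the Labels maps in the dumps:

// listCSIHostpathContainers restricts the listing to the running
// containers of the csi-hostpathplugin-p6b2q pod seen above.
func listCSIHostpathContainers(ctx context.Context, rt runtimeapi.RuntimeServiceClient) ([]*runtimeapi.Container, error) {
	running := runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING}
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &running,
			LabelSelector: map[string]string{
				"io.kubernetes.pod.name":      "csi-hostpathplugin-p6b2q",
				"io.kubernetes.pod.namespace": "kube-system",
			},
		},
	})
	if err != nil {
		return nil, err
	}
	return resp.Containers, nil
}

With this filter, CRI-O would take the filtered path instead of logging "No filters were applied", returning only the six csi-hostpathplugin-p6b2q containers (csi-snapshotter, csi-provisioner, liveness-probe, hostpath, node-driver-registrar, csi-external-health-monitor-controller) rather than the full list dumped above.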
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.479371188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b797fee-672d-4375-82c1-ec1bd24d231e name=/runtime.v1.RuntimeService/Version
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.479463702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b797fee-672d-4375-82c1-ec1bd24d231e name=/runtime.v1.RuntimeService/Version
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.480877759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85300c78-0c06-4c4e-b7ec-c089802614e5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.482149932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763517944482122317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85300c78-0c06-4c4e-b7ec-c089802614e5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.483398821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ffb8f06-a0a0-4a3f-b571-ff4fed8bfbb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.483560977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ffb8f06-a0a0-4a3f-b571-ff4fed8bfbb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 02:05:44 addons-218289 crio[816]: time="2025-11-19 02:05:44.484094831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:727afc2dec9495b8e2f91a60222523d7252fd1f44b309b19943be13123ed65e7,PodSandboxId:99b84a617cd04f7734ad7c85fb4cbeedb564bc159ddee6b15a6d35f69ff68df5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763517559203911768,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 135bfd36-2352-4de6-a595-ee44e83d5f6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67baaf793d5ef013a45d181f9504e2c6fd17c9fb5f1373a4f4ef6ac1348cbdad,PodSandboxId:3a9e9486c21fe6e045bb0e64e2f0108c95c11661d1db979a7f5cee437483c630,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763517522841787274,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1e6e7f2-2038-4edf-9d9f-e0df8b042b38,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dffcb6433b668923ccaa6b68ea5f5703d5611da2a93474d9ab5f5b10a1e485f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763517498503962204,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b76ce050ac0b3293632738f076a766f3d6fd3b58d01c3cc1d6a0382aef09a9,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763517496942705917,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e22fcb254c304c9a07d90f714674bcd58f8d9c63eb5476823fa98072f9811c23,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763517495161054677,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-992
6-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5475cb51623e4e67a9eae99b521cc1140854b924b0714f75c29d54c94f9f2f,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763517494172986700,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
f4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4f15cc1a6d79122d6460fec61497605c71845d766322be74f6614251c8e13d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763517492721999486,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97db18b98e110be070ae35ee73138fd11c5ba3b17c9aaa484991c43d763e9a55,PodSandboxId:5b4521ab4fd929c1351922cc81716ffe1d45268180044fa596eb990cf29651e3,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763517491407585947
,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e677e88-f1a1-4349-ab4a-ca5fc1625de0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4fb977d3bdbac0a79d2003f13473ec9efa02e41788b6915f23ed2b6dc8e649d,PodSandboxId:b46ab43fd47d7b056bcb9b0b4ff330c29b932c04d7aac98b3ecfc44673589365,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d87
19a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763517489905899642,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-p6b2q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf4a17c8-dce0-48f7-9926-c6899341a09b,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bec5948a3b0b6d85663bcc54a7678b1ad00c0a0cd97e6ccb2e389ca190e5b7d,PodSandboxId:459a8af32380f9c4e3915bb69cfc3749e47cb6b9cdcf7011dce68c410d10aad4,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517487952068207,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-vfht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed086c0-f1c8-43c4-8169-ce475fdfdd33,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291bbb19b04d1ef6f333fdd39da87aae78308f4944aacdcd99cd300c7ed8e316,PodSandboxId:7c0efe47e19fcb4a97b5cbe98e9d037d6810d61399046721cc66f827cedb0024,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519
d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763517487803557432,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b814d83-143e-4277-9129-6a9219cccb21,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939ec33be6abea25e835db0a4c1feea78964b94ab98833ad5d89d1c2969f5f,PodSandboxId:4dd16219bd3ea2aca807ff7423efff479ca92ef70d7338bd1610f996a0c5101e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763517484972917705,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-v87lj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e442212e-6930-4bc6-8062-f14b3d34e047,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c37385089e2fd5010e4c5eed6eb2712cdc98d10f0ddfdff3c63b06d2d65a8a5,PodSandboxId:fbdc9dfa091b63143a4d3ff2c7bae60d86c5ed70095351832d43471dea29907c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763517477451146665,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-hlxxx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3d1a057b-0eec-4bb7-beaa-1697b59b68a5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e09ec1e87aeccfdc35d357732ec75950e2f59b3b345e2c10a81275cb3fd018,PodSandboxId:6cab7defc307586598251f43f2edc3e257df822f26b7b122e33369092a2a44d8,Metadata:&Container
Metadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763517450386400140,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nxbdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82477383-7556-4a81-a9eb-e1e97bc71ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806aed04c77717c07bd0dbbcbf0c24802085be59805c4afc5aa5d61c065570a6,PodSandboxId:1468403c01b0cf510196b0f96d2d25272abacbf8f95f
3845f1675d800d5c8c86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763517445914785294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af62a255-b0ad-411e-b2ac-cac4da21796a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad27302e7041367cc6999d8521c1f0fc9a761832f47f5bebb2b72835c3a338f,PodSandboxId:4eeb349dff4d7e5fb6ba73695a192f6af5752a740d2d9a3bd8fcefbf
b3d9c783,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763517435514069247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pq5np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6e814f-096f-4018-baab-6e6d62808c2b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ea8a3b749f7ca60e18092d90f58348f2fddd8e74f8ef03e750ee4eb5947b6f,PodSandboxId:d0c41d7971ab4302c77ee6d81c1f842b405c6eb42cd303a7db5d1723603cc48c,Metadata:&ContainerMetad
ata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763517435224289288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-nwdcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a00a2f0d-7535-4aaf-8dbf-0164b16fa453,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c20b8a8f389ee678e1ebf3800604f594655c499015319f393b3666a64dfdd0,PodSandboxId:30d3fb9e3ad1a2fda31d8f0811c7643c244dd8f098c4cb71cb91f9ecc5db457a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763517421833063054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bc89227a9990d7a51273102785bed2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc2a41a0f970aa4059e28629a2cd153101214d9bf53eedec076be56afff321f,PodSandboxId:4ce9c90eeda6fcf4668b70de0669b691a9d3c807ec138b85414fc457700455c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763517421782552463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d727631b7e92247f04647b78bf14acb0,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a65806a2e7e76d3fc500c7e2f35b2715ad570f9d2f4893ad372d68111a601d4a,PodSandboxId:0c940fab7a5ca713f08fb57a3e5f0f45784c539af6a9e0fafb100229445f9c55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763517421763146896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: af361a144fce44f41040a127542ce6bd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abc14296f62cde52576ee82d465666b680625f61b99e1b663d31842e81befe0f,PodSandboxId:7969760c69308e9c23f52b0994b20aed06f1308f51cd1260a99649de5ed421b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763517421762217261,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-218289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebcac38048370cbd5c5cbfa0b8ec4638,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ffb8f06-a0a0-4a3f-b571-ff4fed8bfbb3 name=/runtime.v1.RuntimeService/ListContainers
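
Editor's note: the block above is a single ListContainers response, logged verbatim by CRI-O's OpenTelemetry interceptor (otel-collector/interceptors.go) and hard-wrapped by the report renderer. For anyone reproducing the inspection, here is a minimal Go sketch that issues the same /runtime.v1.RuntimeService/ListContainers RPC named in the log; it is not part of the test suite, and the CRI-O socket path is the assumed minikube default.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed default CRI-O endpoint on a minikube guest.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // The same RPC that produced the log record above.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-17s  %s\n", c.Id[:13], c.State,
                c.Labels["io.kubernetes.container.name"])
        }
    }

The `container status` table that follows is the human-readable rendering of this same data.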
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                       NAMESPACE
	727afc2dec949       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              6 minutes ago       Running             nginx                                    0                   99b84a617cd04       nginx                                     default
	67baaf793d5ef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   3a9e9486c21fe       busybox                                   default
	6dffcb6433b66       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	a5b76ce050ac0       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	e22fcb254c304       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	fa5475cb51623       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	5d4f15cc1a6d7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	97db18b98e110       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   5b4521ab4fd92       csi-hostpath-resizer-0                    kube-system
	e4fb977d3bdba       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   b46ab43fd47d7       csi-hostpathplugin-p6b2q                  kube-system
	2bec5948a3b0b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   459a8af32380f       snapshot-controller-7d9fbc56b8-vfht9      kube-system
	291bbb19b04d1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   7c0efe47e19fc       csi-hostpath-attacher-0                   kube-system
	2c939ec33be6a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   4dd16219bd3ea       snapshot-controller-7d9fbc56b8-v87lj      kube-system
	2c37385089e2f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   fbdc9dfa091b6       local-path-provisioner-648f6765c9-hlxxx   local-path-storage
	49e09ec1e87ae       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   6cab7defc3075       amd-gpu-device-plugin-nxbdq               kube-system
	806aed04c7771       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   1468403c01b0c       storage-provisioner                       kube-system
	5ad27302e7041       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   4eeb349dff4d7       kube-proxy-pq5np                          kube-system
	62ea8a3b749f7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   d0c41d7971ab4       coredns-66bc5c9577-nwdcw                  kube-system
	38c20b8a8f389       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   30d3fb9e3ad1a       kube-scheduler-addons-218289              kube-system
	bdc2a41a0f970       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   4ce9c90eeda6f       etcd-addons-218289                        kube-system
	a65806a2e7e76       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   0c940fab7a5ca       kube-apiserver-addons-218289              kube-system
	abc14296f62cd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   7969760c69308       kube-controller-manager-addons-218289     kube-system
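
The CONTAINER and POD ID columns in this table are the first 13 hex characters of the 64-character IDs in the ListContainers dump above, the same truncation crictl applies when printing this view. A one-line illustration, with the ID copied from the log:

    package main

    import "fmt"

    func main() {
        // Full container ID of the liveness-probe container from the dump above.
        full := "e22fcb254c304c9a07d90f714674bcd58f8d9c63eb5476823fa98072f9811c23"
        fmt.Println(full[:13]) // e22fcb254c304, matching the table row
    }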
	
	
	==> coredns [62ea8a3b749f7ca60e18092d90f58348f2fddd8e74f8ef03e750ee4eb5947b6f] <==
	[INFO] 10.244.0.22:45977 - 27477 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093977s
	[INFO] 10.244.0.22:45977 - 35826 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000108216s
	[INFO] 10.244.0.22:60540 - 64901 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090622s
	[INFO] 10.244.0.22:45977 - 64990 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000230636s
	[INFO] 10.244.0.22:45977 - 17665 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000180622s
	[INFO] 10.244.0.22:45977 - 39429 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000346755s
	[INFO] 10.244.0.22:60540 - 57385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000254194s
	[INFO] 10.244.0.22:60540 - 54128 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087725s
	[INFO] 10.244.0.22:60540 - 9348 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109367s
	[INFO] 10.244.0.22:60540 - 980 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00009741s
	[INFO] 10.244.0.22:60540 - 61514 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190617s
	[INFO] 10.244.0.22:37619 - 46026 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000160749s
	[INFO] 10.244.0.22:58147 - 64348 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000187856s
	[INFO] 10.244.0.22:37619 - 35327 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088454s
	[INFO] 10.244.0.22:58147 - 32239 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000128902s
	[INFO] 10.244.0.22:37619 - 55888 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000115146s
	[INFO] 10.244.0.22:58147 - 61894 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000337126s
	[INFO] 10.244.0.22:37619 - 44307 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000162493s
	[INFO] 10.244.0.22:58147 - 15353 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000164802s
	[INFO] 10.244.0.22:58147 - 2895 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072285s
	[INFO] 10.244.0.22:37619 - 29367 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000371649s
	[INFO] 10.244.0.22:37619 - 31703 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000431532s
	[INFO] 10.244.0.22:58147 - 49452 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000346619s
	[INFO] 10.244.0.22:37619 - 28359 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103575s
	[INFO] 10.244.0.22:58147 - 59717 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000185122s
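
The NXDOMAIN/NOERROR bursts above are ordinary search-path expansion, not a resolution failure: the client at 10.244.0.22 (evidently a pod whose resolv.conf search list starts with ingress-nginx.svc.cluster.local) looked up hello-world-app.default.svc.cluster.local, which has only four dots, below the Kubernetes default ndots:5, so every search suffix is tried and rejected before the absolute name answers NOERROR. A self-contained Go sketch of that expansion; the search list and ndots value are the assumed pod defaults, not taken from the report:

    package main

    import (
        "fmt"
        "strings"
    )

    // expand mimics the resolver's search-list handling: names with fewer
    // than ndots dots try the search suffixes before the absolute name.
    func expand(name string, search []string, ndots int) []string {
        if strings.HasSuffix(name, ".") {
            return []string{name} // already fully qualified
        }
        suffixed := make([]string, 0, len(search)+1)
        for _, s := range search {
            suffixed = append(suffixed, name+"."+s)
        }
        if strings.Count(name, ".") < ndots {
            return append(suffixed, name) // suffixes first, absolute last
        }
        return append([]string{name}, suffixed...) // absolute first
    }

    func main() {
        search := []string{ // assumed search list for an ingress-nginx pod
            "ingress-nginx.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
        }
        for _, q := range expand("hello-world-app.default.svc.cluster.local", search, 5) {
            fmt.Println(q)
        }
    }

Running it prints the three suffixed names that returned NXDOMAIN above, followed by the bare name that returned NOERROR.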
	
	
	==> describe nodes <==
	Name:               addons-218289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-218289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=addons-218289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T01_57_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-218289
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-218289"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 01:57:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-218289
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:05:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:02:34 +0000   Wed, 19 Nov 2025 01:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:02:34 +0000   Wed, 19 Nov 2025 01:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:02:34 +0000   Wed, 19 Nov 2025 01:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:02:34 +0000   Wed, 19 Nov 2025 01:57:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-218289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 02a5de5cfd4a4c92890309ec498225b5
	  System UUID:                02a5de5c-fd4a-4c92-8903-09ec498225b5
	  Boot ID:                    37b3b77c-b2cb-4dc8-ba80-33f498d578a5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  default                     hello-world-app-5d498dc89-bg2fj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  default                     task-pv-pod-restore                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 amd-gpu-device-plugin-nxbdq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 coredns-66bc5c9577-nwdcw                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m31s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 csi-hostpathplugin-p6b2q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 etcd-addons-218289                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-addons-218289               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-addons-218289      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-pq5np                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-scheduler-addons-218289               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-v87lj       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-vfht9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  local-path-storage          local-path-provisioner-648f6765c9-hlxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m27s  kube-proxy       
	  Normal  Starting                 8m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s  kubelet          Node addons-218289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s  kubelet          Node addons-218289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s  kubelet          Node addons-218289 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m36s  kubelet          Node addons-218289 status is now: NodeReady
	  Normal  RegisteredNode           8m33s  node-controller  Node addons-218289 event: Registered Node addons-218289 in Controller
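
As a cross-check of the Allocated resources block, the 750m CPU figure is simply the sum of the per-pod requests listed above (coredns 100m, etcd 100m, kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m) over the node's 2000m allocatable capacity, truncated to 37%. Illustrative arithmetic only:

    package main

    import "fmt"

    func main() {
        // Per-pod CPU requests from the table above, in millicores.
        requests := []int{100, 100, 250, 200, 100}
        total := 0
        for _, r := range requests {
            total += r
        }
        allocatable := 2000 // 2 CPUs
        fmt.Printf("%dm / %dm = %d%%\n", total, allocatable, total*100/allocatable)
        // prints: 750m / 2000m = 37%
    }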
	
	
	==> dmesg <==
	[  +0.000028] kauditd_printk_skb: 288 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 269 callbacks suppressed
	[  +2.078480] kauditd_printk_skb: 434 callbacks suppressed
	[ +12.821796] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.034450] kauditd_printk_skb: 59 callbacks suppressed
	[  +7.153703] kauditd_printk_skb: 59 callbacks suppressed
	[Nov19 01:58] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.955010] kauditd_printk_skb: 120 callbacks suppressed
	[  +1.167131] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000253] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.000095] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.009787] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.594538] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.461566] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.015315] kauditd_printk_skb: 22 callbacks suppressed
	[Nov19 01:59] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.064441] kauditd_printk_skb: 156 callbacks suppressed
	[  +0.894196] kauditd_printk_skb: 152 callbacks suppressed
	[  +1.996984] kauditd_printk_skb: 195 callbacks suppressed
	[  +3.756064] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.186565] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.833060] kauditd_printk_skb: 5 callbacks suppressed
	[Nov19 02:01] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.446533] kauditd_printk_skb: 46 callbacks suppressed
	[Nov19 02:02] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [bdc2a41a0f970aa4059e28629a2cd153101214d9bf53eedec076be56afff321f] <==
	{"level":"info","ts":"2025-11-19T01:59:01.394862Z","caller":"traceutil/trace.go:172","msg":"trace[158953670] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"129.251311ms","start":"2025-11-19T01:59:01.265539Z","end":"2025-11-19T01:59:01.394790Z","steps":["trace[158953670] 'process raft request'  (duration: 129.120071ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:59:03.791169Z","caller":"traceutil/trace.go:172","msg":"trace[91129303] linearizableReadLoop","detail":"{readStateIndex:1366; appliedIndex:1366; }","duration":"352.151863ms","start":"2025-11-19T01:59:03.439000Z","end":"2025-11-19T01:59:03.791152Z","steps":["trace[91129303] 'read index received'  (duration: 352.146575ms)","trace[91129303] 'applied index is now lower than readState.Index'  (duration: 4.566µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T01:59:03.791409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"352.309879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:03.791455Z","caller":"traceutil/trace.go:172","msg":"trace[78679464] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1325; }","duration":"352.453077ms","start":"2025-11-19T01:59:03.438996Z","end":"2025-11-19T01:59:03.791449Z","steps":["trace[78679464] 'agreement among raft nodes before linearized reading'  (duration: 352.280899ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:03.791483Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T01:59:03.438983Z","time spent":"352.489522ms","remote":"127.0.0.1:37444","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/statefulsets\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T01:59:03.793627Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.389892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:03.794340Z","caller":"traceutil/trace.go:172","msg":"trace[469838261] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1326; }","duration":"138.774896ms","start":"2025-11-19T01:59:03.655224Z","end":"2025-11-19T01:59:03.793999Z","steps":["trace[469838261] 'agreement among raft nodes before linearized reading'  (duration: 138.037987ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:59:03.795442Z","caller":"traceutil/trace.go:172","msg":"trace[31424680] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1326; }","duration":"357.950009ms","start":"2025-11-19T01:59:03.437482Z","end":"2025-11-19T01:59:03.795432Z","steps":["trace[31424680] 'process raft request'  (duration: 354.279963ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:03.795270Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.326887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:03.796028Z","caller":"traceutil/trace.go:172","msg":"trace[666877106] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1326; }","duration":"108.085722ms","start":"2025-11-19T01:59:03.687933Z","end":"2025-11-19T01:59:03.796019Z","steps":["trace[666877106] 'agreement among raft nodes before linearized reading'  (duration: 107.308118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:03.797652Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.118884ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:03.799529Z","caller":"traceutil/trace.go:172","msg":"trace[1192456674] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1326; }","duration":"141.990195ms","start":"2025-11-19T01:59:03.657522Z","end":"2025-11-19T01:59:03.799512Z","steps":["trace[1192456674] 'agreement among raft nodes before linearized reading'  (duration: 140.101362ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:03.798297Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T01:59:03.437466Z","time spent":"359.802678ms","remote":"127.0.0.1:50984","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":71,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" mod_revision:991 > success:<request_delete_range:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" > > failure:<request_range:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" > >"}
	{"level":"info","ts":"2025-11-19T01:59:07.451254Z","caller":"traceutil/trace.go:172","msg":"trace[1730160220] transaction","detail":"{read_only:false; response_revision:1383; number_of_response:1; }","duration":"138.277529ms","start":"2025-11-19T01:59:07.312964Z","end":"2025-11-19T01:59:07.451242Z","steps":["trace[1730160220] 'process raft request'  (duration: 138.188655ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:12.582368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.463066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-qphhj\" limit:1 ","response":"range_response_count:1 size:3723"}
	{"level":"info","ts":"2025-11-19T01:59:12.582458Z","caller":"traceutil/trace.go:172","msg":"trace[1941703685] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-qphhj; range_end:; response_count:1; response_revision:1447; }","duration":"158.560028ms","start":"2025-11-19T01:59:12.423885Z","end":"2025-11-19T01:59:12.582445Z","steps":["trace[1941703685] 'range keys from in-memory index tree'  (duration: 158.039166ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:12.582756Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.328401ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6657053184690397853 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1408 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T01:59:12.583059Z","caller":"traceutil/trace.go:172","msg":"trace[1269676517] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"156.923365ms","start":"2025-11-19T01:59:12.426126Z","end":"2025-11-19T01:59:12.583050Z","steps":["trace[1269676517] 'process raft request'  (duration: 48.038433ms)","trace[1269676517] 'compare'  (duration: 107.779077ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T01:59:33.076245Z","caller":"traceutil/trace.go:172","msg":"trace[5652168] linearizableReadLoop","detail":"{readStateIndex:1684; appliedIndex:1684; }","duration":"181.855656ms","start":"2025-11-19T01:59:32.894372Z","end":"2025-11-19T01:59:33.076228Z","steps":["trace[5652168] 'read index received'  (duration: 181.850353ms)","trace[5652168] 'applied index is now lower than readState.Index'  (duration: 4.537µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T01:59:33.076403Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.015237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:33.076426Z","caller":"traceutil/trace.go:172","msg":"trace[1870692752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1630; }","duration":"182.051873ms","start":"2025-11-19T01:59:32.894369Z","end":"2025-11-19T01:59:33.076421Z","steps":["trace[1870692752] 'agreement among raft nodes before linearized reading'  (duration: 181.987551ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T01:59:33.077185Z","caller":"traceutil/trace.go:172","msg":"trace[1100380230] transaction","detail":"{read_only:false; response_revision:1631; number_of_response:1; }","duration":"267.593625ms","start":"2025-11-19T01:59:32.809583Z","end":"2025-11-19T01:59:33.077176Z","steps":["trace[1100380230] 'process raft request'  (duration: 266.940132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T01:59:33.077171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.098672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T01:59:33.077910Z","caller":"traceutil/trace.go:172","msg":"trace[891708339] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1631; }","duration":"124.843127ms","start":"2025-11-19T01:59:32.953058Z","end":"2025-11-19T01:59:33.077901Z","steps":["trace[891708339] 'agreement among raft nodes before linearized reading'  (duration: 124.0826ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:00:05.187751Z","caller":"traceutil/trace.go:172","msg":"trace[455551383] transaction","detail":"{read_only:false; response_revision:1723; number_of_response:1; }","duration":"179.681304ms","start":"2025-11-19T02:00:05.008044Z","end":"2025-11-19T02:00:05.187725Z","steps":["trace[455551383] 'process raft request'  (duration: 179.540452ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:05:44 up 9 min,  0 users,  load average: 1.02, 1.30, 0.87
	Linux addons-218289 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a65806a2e7e76d3fc500c7e2f35b2715ad570f9d2f4893ad372d68111a601d4a] <==
	W1119 01:57:23.467425       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:23.487722       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1119 01:57:24.720660       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.177.134"}
	W1119 01:57:41.848614       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:41.866279       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:41.879252       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:41.888671       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:57:59.166439       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 01:57:59.168054       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 01:57:59.167916       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.160.206:443: connect: connection refused" logger="UnhandledError"
	E1119 01:57:59.183089       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.160.206:443: connect: connection refused" logger="UnhandledError"
	E1119 01:57:59.184446       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.160.206:443: connect: connection refused" logger="UnhandledError"
	E1119 01:57:59.195204       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.160.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.160.206:443: connect: connection refused" logger="UnhandledError"
	I1119 01:57:59.387668       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 01:58:49.125208       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:51188: use of closed network connection
	E1119 01:58:49.323634       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:51220: use of closed network connection
	I1119 01:58:58.568941       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.112.228"}
	I1119 01:59:16.031150       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 01:59:16.232497       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.175.4"}
	I1119 01:59:39.962654       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1119 02:00:00.196016       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1119 02:01:39.618000       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.222.165"}
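
The UnhandledError lines show the aggregated v1beta1.metrics.k8s.io APIService failing while its metrics-server backend was unreachable (connection refused to 10.104.160.206:443), until the item was dropped from the aggregator's queue at 02:00:00. One way to observe the same symptom from a client, sketched with client-go; the kubeconfig path is an assumption:

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This discovery call is proxied through the aggregated APIService, so
        // a broken metrics-server surfaces here as a 503 or connection error.
        if _, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
            fmt.Println("metrics.k8s.io/v1beta1 unavailable:", err)
        }
    }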
	
	
	==> kube-controller-manager [abc14296f62cde52576ee82d465666b680625f61b99e1b663d31842e81befe0f] <==
	I1119 01:57:11.843711       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 01:57:11.847748       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:57:11.852188       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 01:57:11.866542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 01:57:11.868671       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 01:57:11.870207       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 01:57:11.870264       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 01:57:11.870290       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 01:57:11.870532       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 01:57:11.871221       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 01:57:11.872261       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 01:57:11.873750       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E1119 01:57:19.759943       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 01:57:41.841703       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:57:41.842078       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 01:57:41.842237       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 01:57:41.856008       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 01:57:41.861054       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 01:57:42.942998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:57:42.962295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:59:02.596373       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1119 01:59:22.878695       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1119 01:59:31.072042       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1119 01:59:34.782952       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1119 02:01:54.542422       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [5ad27302e7041367cc6999d8521c1f0fc9a761832f47f5bebb2b72835c3a338f] <==
	I1119 01:57:16.445116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 01:57:16.548766       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 01:57:16.556785       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1119 01:57:16.557975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 01:57:16.855894       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 01:57:16.857987       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 01:57:16.858051       1 server_linux.go:132] "Using iptables Proxier"
	I1119 01:57:16.873793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 01:57:16.874373       1 server.go:527] "Version info" version="v1.34.1"
	I1119 01:57:16.874406       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:57:16.886151       1 config.go:200] "Starting service config controller"
	I1119 01:57:16.895435       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 01:57:16.893385       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 01:57:16.914718       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 01:57:16.931436       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 01:57:16.895126       1 config.go:309] "Starting node config controller"
	I1119 01:57:16.958369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 01:57:16.959073       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 01:57:16.893366       1 config.go:106] "Starting endpoint slice config controller"
	I1119 01:57:16.962420       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 01:57:16.962433       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 01:57:17.014795       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
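	Note: kube-proxy's warning above already names the remedy for the unset nodePortAddresses: limit NodePort listeners to the primary node IPs. A sketch against this cluster (the kube-proxy ConfigMap name is the kubeadm default and is an assumption here):
	
	  # inspect the generated kube-proxy configuration
	  kubectl --context addons-218289 -n kube-system get configmap kube-proxy -o yaml
	  # flag form suggested by the log itself: --nodeport-addresses primary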
	
	==> kube-scheduler [38c20b8a8f389ee678e1ebf3800604f594655c499015319f393b3666a64dfdd0] <==
	E1119 01:57:05.227592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 01:57:05.227725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 01:57:05.228510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 01:57:05.228630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 01:57:05.228903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 01:57:05.229018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 01:57:05.229401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 01:57:05.229612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 01:57:05.229910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 01:57:05.230077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:57:05.230198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 01:57:05.230254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 01:57:05.230306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 01:57:05.230359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 01:57:05.230471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 01:57:05.230875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:57:06.062100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 01:57:06.081132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 01:57:06.123227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 01:57:06.157800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 01:57:06.202455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 01:57:06.249222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 01:57:06.278403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 01:57:06.291063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1119 01:57:08.514792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
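	Note: the burst of "Failed to watch ... is forbidden" errors is confined to startup (01:57:05-01:57:06), before RBAC for system:kube-scheduler was in place; once the caches sync at 01:57:08 they stop. Had they persisted, the grant could be probed directly:
	
	  kubectl --context addons-218289 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler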
	
	==> kubelet <==
	Nov 19 02:04:37 addons-218289 kubelet[1506]: E1119 02:04:37.964189    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517877963121106  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:04:47 addons-218289 kubelet[1506]: E1119 02:04:47.967367    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517887966891186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:04:47 addons-218289 kubelet[1506]: E1119 02:04:47.967389    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517887966891186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:04:48 addons-218289 kubelet[1506]: E1119 02:04:48.668592    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-bg2fj" podUID="8c11ad2a-3450-4138-8328-c887b64fc6de"
	Nov 19 02:04:57 addons-218289 kubelet[1506]: E1119 02:04:57.974321    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517897973288449  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:04:57 addons-218289 kubelet[1506]: E1119 02:04:57.974353    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517897973288449  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:04:59 addons-218289 kubelet[1506]: E1119 02:04:59.671230    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-bg2fj" podUID="8c11ad2a-3450-4138-8328-c887b64fc6de"
	Nov 19 02:05:02 addons-218289 kubelet[1506]: I1119 02:05:02.666613    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:05:05 addons-218289 kubelet[1506]: E1119 02:05:05.893392    1506 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 19 02:05:05 addons-218289 kubelet[1506]: E1119 02:05:05.893477    1506 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 19 02:05:05 addons-218289 kubelet[1506]: E1119 02:05:05.893567    1506 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod-restore_default(720e1e0b-ad86-4787-aea0-8bbccbd2857f): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 19 02:05:05 addons-218289 kubelet[1506]: E1119 02:05:05.893600    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="720e1e0b-ad86-4787-aea0-8bbccbd2857f"
	Nov 19 02:05:07 addons-218289 kubelet[1506]: E1119 02:05:07.977056    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517907976555516  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:07 addons-218289 kubelet[1506]: E1119 02:05:07.977081    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517907976555516  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:12 addons-218289 kubelet[1506]: E1119 02:05:12.668611    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-bg2fj" podUID="8c11ad2a-3450-4138-8328-c887b64fc6de"
	Nov 19 02:05:17 addons-218289 kubelet[1506]: E1119 02:05:17.979892    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517917979441350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:17 addons-218289 kubelet[1506]: E1119 02:05:17.979932    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517917979441350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:19 addons-218289 kubelet[1506]: E1119 02:05:19.667497    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="720e1e0b-ad86-4787-aea0-8bbccbd2857f"
	Nov 19 02:05:20 addons-218289 kubelet[1506]: I1119 02:05:20.667547    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nxbdq" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:05:25 addons-218289 kubelet[1506]: I1119 02:05:25.667362    1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-nwdcw" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:05:27 addons-218289 kubelet[1506]: E1119 02:05:27.982070    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517927981582262  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:27 addons-218289 kubelet[1506]: E1119 02:05:27.982113    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517927981582262  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:34 addons-218289 kubelet[1506]: E1119 02:05:34.667467    1506 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="720e1e0b-ad86-4787-aea0-8bbccbd2857f"
	Nov 19 02:05:37 addons-218289 kubelet[1506]: E1119 02:05:37.987588    1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763517937986747994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 02:05:37 addons-218289 kubelet[1506]: E1119 02:05:37.987643    1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763517937986747994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	
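	Note: two failures repeat through the kubelet log: the eviction manager cannot derive HasDedicatedImageFs from CRI-O's image-filesystem stats, and every docker.io pull dies on the unauthenticated rate limit. The image-filesystem side can be inspected from inside the VM (a diagnostic sketch; crictl needs root):
	
	  minikube -p addons-218289 ssh -- sudo crictl imagefsinfo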
	
	==> storage-provisioner [806aed04c77717c07bd0dbbcbf0c24802085be59805c4afc5aa5d61c065570a6] <==
	W1119 02:05:18.991353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:20.994985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:21.001121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:23.004695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:23.013345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:25.016919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:25.022162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:27.025997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:27.031910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:29.036062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:29.044398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:31.049008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:31.054602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:33.059198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:33.065161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:35.069608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:35.079201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:37.082919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:37.090684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:39.093787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:39.099725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:41.103512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:41.109337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:43.112435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:05:43.117734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
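	Note: the two-second cadence of the Endpoints deprecation warnings above suggests the provisioner's leader-election renewals still go through v1 Endpoints; they are noise, not errors. The replacement objects the warning points at can be listed with:
	
	  kubectl --context addons-218289 -n kube-system get endpointslices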

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-218289 -n addons-218289
helpers_test.go:269: (dbg) Run:  kubectl --context addons-218289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-bg2fj task-pv-pod-restore
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-218289 describe pod hello-world-app-5d498dc89-bg2fj task-pv-pod-restore
helpers_test.go:290: (dbg) kubectl --context addons-218289 describe pod hello-world-app-5d498dc89-bg2fj task-pv-pod-restore:

-- stdout --
	Name:             hello-world-app-5d498dc89-bg2fj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218289/192.168.39.195
	Start Time:       Wed, 19 Nov 2025 02:01:39 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:           10.244.0.33
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s5tss (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s5tss:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m6s                 default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-bg2fj to addons-218289
	  Warning  Failed     70s (x3 over 3m22s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     70s (x3 over 3m22s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    33s (x5 over 3m22s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     33s (x5 over 3m22s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x4 over 4m5s)   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-218289/192.168.39.195
	Start Time:       Wed, 19 Nov 2025 01:59:43 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rgbc4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-rgbc4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-218289
	  Normal   Pulling    72s (x5 over 6m2s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     40s (x5 over 5m32s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     40s (x5 over 5m32s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x12 over 5m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x12 over 5m31s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
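Both non-running pods fail for the same reason visible in the kubelet log: unauthenticated docker.io pulls hitting toomanyrequests. A hedged mitigation sketch, either pre-loading the exact images the test manifests use or authenticating the pulls (regcred and the <user>/<token> placeholders are illustrative, not part of the suite):

	minikube -p addons-218289 image load docker.io/kicbase/echo-server:1.0
	minikube -p addons-218289 image load docker.io/nginx:latest
	# or authenticate pulls from Docker Hub:
	kubectl --context addons-218289 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>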
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-218289 addons disable volumesnapshots --alsologtostderr -v=1
panic: test timed out after 2h0m0s
	running tests:
		TestAddons (1h59m47s)
		TestAddons/parallel/CSI (1h57m2s)

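The 2h0m0s ceiling is the Go test binary's -timeout alarm, not a minikube setting, and the goroutine dump below is Go's standard response to it. Reproducing locally with a different budget would look roughly like this (the suite's own flags, e.g. driver selection, are omitted):

	go test ./test/integration -run 'TestAddons/parallel/CSI' -timeout 3h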
goroutine 898 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 119 minutes]:
testing.(*T).Run(0xc00052ae00, {0x32011df?, 0xc000b15a78?}, 0x3c234f8)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc00052ae00)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc00052ae00, 0xc000b15bb8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000802090, {0x5c342e0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc0008ce340?, 0x5c5ca00?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc0006792c0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0006792c0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0x105
main.main()
	_testmain.go:133 +0xa8

goroutine 100 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000b48c40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000b48c40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc000b48c40)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000b48c40, 0x3c23610)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 131 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc000b2c2a0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 116
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 121 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc000b4cf50, 0xc0000c8f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x0?, 0xc000b4cf50, 0xc000b4cf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 132
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 862 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0x765b3c946ee0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001b08060?, 0xc000254600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b08060, {0xc000254600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b38028, {0xc000254600?, 0x41835f?, 0x2c473e0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0007de150, {0x3f35b20, 0xc000514138})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc0007de150}, {0x3f35b20, 0xc000514138}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b38028?, {0x3f35ca0, 0xc0007de150})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000b38028, {0x3f35ca0, 0xc0007de150})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc0007de150}, {0x3f35ba0, 0xc000b38028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001a44690?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 188
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 181 [chan receive, 114 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc000102fc0, 0xc000b79170)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
created by testing.(*T).Run in goroutine 101
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 132 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0014a6840, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 116
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 101 [chan receive, 117 minutes]:
testing.(*T).Run(0xc000b48e00, {0x31fcb90?, 0x22ecb25c000?}, 0xc000b79170)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestAddons(0xc000b48e00)
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:141 +0x2f4
testing.tRunner(0xc000b48e00, 0x3c234f8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 120 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000125b10, 0x17)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc00008fce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014a6840)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x512aec8?, 0x5a97220?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc00155bf50, {0x3f37720, 0xc000b78150}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b488c0?, {0x3f37720?, 0xc000b78150?}, 0xc0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b56270, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 132
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 122 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 121
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 113 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc0016a2000)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 123
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

goroutine 178 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc0016a2000)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 123
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

goroutine 128 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc0014b79e0)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 196
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

goroutine 127 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc0014b79e0)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 196
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

goroutine 188 [syscall, 110 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xf, 0xc0013f56f0, 0x4, 0xc00150e1b0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0013f571e?, 0xc0013f5848?, 0x5930ab?, 0x7ffe384361d8?, 0x1?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc001b2a018?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc0000bf008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0014c8180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0014c8180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc00154c1c0, 0xc0014c8180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.disableAddon(0xc00154c1c0, {0x320d3e4, 0xf}, {0xc00050e4f0?, 0xc00154c380?})
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:1053 +0x12a
runtime.Goexit()
	/usr/local/go/src/runtime/panic.go:636 +0x5e
testing.(*common).FailNow(0xc00154c1c0)
	/usr/local/go/src/testing/testing.go:1041 +0x4a
testing.(*common).Fatalf(0xc00154c1c0, {0x3274ac3?, 0xc00035c000?}, {0xc00008fd90?, 0xc00050e4f0?, 0xd?})
	/usr/local/go/src/testing/testing.go:1125 +0x5e
k8s.io/minikube/test/integration.validateCSIDriverAndSnapshots({0x3f80350, 0xc00035c000}, 0xc00154c1c0, {0xc00050e4f0, 0xd})
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:610 +0x1645
k8s.io/minikube/test/integration.TestAddons.func4.1(0xc00154c1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:165 +0x6c
testing.tRunner(0xc00154c1c0, 0xc0005772c0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 181
	/usr/local/go/src/testing/testing.go:1851 +0x413

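Goroutine 188 is the informative one: the test had already failed in validateCSIDriverAndSnapshots (addons_test.go:610) and was blocked in its cleanup path, waiting on the child process spawned by disableAddon (addons_test.go:1053), when the alarm fired. That child is the command logged just before the panic and can be re-run by hand:

	out/minikube-linux-amd64 -p addons-218289 addons disable volumesnapshots --alsologtostderr -v=1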
goroutine 863 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0x765b3c946508, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001b08180?, 0xc001863c36?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b08180, {0xc001863c36, 0x123ca, 0x123ca})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b38040, {0xc001863c36?, 0x41835f?, 0x2c473e0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0007de210, {0x3f35b20, 0xc0005141c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc0007de210}, {0x3f35b20, 0xc0005141c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b38040?, {0x3f35ca0, 0xc0007de210})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000b38040, {0x3f35ca0, 0xc0007de210})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc0007de210}, {0x3f35ba0, 0xc000b38040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00167f810?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 188
	/usr/local/go/src/os/exec/exec.go:748 +0x92b


Test pass (11/19)

TestDownloadOnly/v1.28.0/json-events (7.1s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-398998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-398998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.103188535s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.10s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 01:56:20.470069  305349 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 01:56:20.470176  305349 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
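preload-exists only asserts that the tarball is already in the local cache; the same check by hand is a one-liner against the path from the log:

	ls -lh /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/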

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-398998
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-398998: exit status 85 (78.653885ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-398998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-398998 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:13
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:13.422165  305361 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:13.422482  305361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:13.422494  305361 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:13.422499  305361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:13.422720  305361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-301472/.minikube/bin
	W1119 01:56:13.422850  305361 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21924-301472/.minikube/config/config.json: open /home/jenkins/minikube-integration/21924-301472/.minikube/config/config.json: no such file or directory
	I1119 01:56:13.423359  305361 out.go:368] Setting JSON to true
	I1119 01:56:13.424425  305361 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34624,"bootTime":1763482749,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:13.424526  305361 start.go:143] virtualization: kvm guest
	I1119 01:56:13.426694  305361 out.go:99] [download-only-398998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1119 01:56:13.426862  305361 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 01:56:13.426924  305361 notify.go:221] Checking for updates...
	I1119 01:56:13.428174  305361 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:13.429640  305361 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:13.430955  305361 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-301472/kubeconfig
	I1119 01:56:13.432542  305361 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-301472/.minikube
	I1119 01:56:13.433975  305361 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 01:56:13.436537  305361 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:13.436896  305361 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:13.473355  305361 out.go:99] Using the kvm2 driver based on user configuration
	I1119 01:56:13.473390  305361 start.go:309] selected driver: kvm2
	I1119 01:56:13.473397  305361 start.go:930] validating driver "kvm2" against <nil>
	I1119 01:56:13.473721  305361 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:13.474234  305361 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1119 01:56:13.474397  305361 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:13.474429  305361 cni.go:84] Creating CNI manager for ""
	I1119 01:56:13.474478  305361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 01:56:13.474488  305361 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:13.474527  305361 start.go:353] cluster config:
	{Name:download-only-398998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-398998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:13.474688  305361 iso.go:125] acquiring lock: {Name:mkd04a343eda8a14ae76b35bb2e328c425b1a958 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:56:13.476268  305361 out.go:99] Downloading VM boot image ...
	I1119 01:56:13.476326  305361 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21924-301472/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1119 01:56:16.684025  305361 out.go:99] Starting "download-only-398998" primary control-plane node in "download-only-398998" cluster
	I1119 01:56:16.684083  305361 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:16.700567  305361 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 01:56:16.700616  305361 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:16.700813  305361 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:16.702810  305361 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 01:56:16.702845  305361 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1119 01:56:16.727464  305361 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1119 01:56:16.727615  305361 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-398998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-398998"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-398998
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (4.13s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-571460 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-571460 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.131301673s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.13s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 01:56:24.991313  305349 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 01:56:24.991353  305349 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-301472/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
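
The preload-exists check reduces to a stat on a well-known cache path, which the log prints verbatim. A hedged sketch follows, assuming only the naming scheme visible in that logged path (the "v18" preload schema version and the "cri-o-overlay-amd64" suffix); preloadPath is an illustrative helper, not a minikube function.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath rebuilds the cache path seen in the log, e.g.
	// .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	// The naming pieces are copied from the logged filename, not derived
	// from minikube's source.
	func preloadPath(minikubeHome, k8sVersion string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.34.1")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload:", p)
		} else {
			fmt.Println("no local preload; a download would be needed:", p)
		}
	}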

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-571460
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-571460: exit status 85 (74.779691ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-398998 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-398998 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-398998                                                                                                                                                 │ download-only-398998 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ -o=json --download-only -p download-only-571460 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-571460 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:20.912742  305541 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:20.912997  305541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:20.913006  305541 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:20.913010  305541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:20.913176  305541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-301472/.minikube/bin
	I1119 01:56:20.913650  305541 out.go:368] Setting JSON to true
	I1119 01:56:20.914506  305541 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34632,"bootTime":1763482749,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:20.914603  305541 start.go:143] virtualization: kvm guest
	I1119 01:56:20.916557  305541 out.go:99] [download-only-571460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:20.916692  305541 notify.go:221] Checking for updates...
	I1119 01:56:20.917994  305541 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:20.919540  305541 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:20.920874  305541 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-301472/kubeconfig
	I1119 01:56:20.923033  305541 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-301472/.minikube
	I1119 01:56:20.924540  305541 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-571460 host does not exist
	  To start a cluster, run: "minikube start -p download-only-571460"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
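
The LogsDuration subtests pass even though "minikube logs" exits non-zero: a download-only profile never creates the control-plane host, so logs has nothing to collect and returns exit status 85, which the test accepts as the expected outcome. A sketch of asserting that specific exit code in Go; the binary path and profile name are copied from the command logged above.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test makes; on a download-only profile it is
		// expected to fail, and exit status 85 is the "pass" outcome.
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-571460")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85")
			return
		}
		fmt.Println("unexpected result:", err)
	}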

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-571460
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1119 01:56:25.687988  305349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-171694 --alsologtostderr --binary-mirror http://127.0.0.1:33811 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-171694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-171694
--- PASS: TestBinaryMirror (0.65s)
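
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:33811 above) that stands in for dl.k8s.io, so the kubectl binary is fetched locally instead of from the internet. Below is a minimal sketch of such a mirror, assuming a ./mirror directory laid out like dl.k8s.io's release paths (e.g. release/v1.34.1/bin/linux/amd64/kubectl plus its .sha256); the directory name is illustrative.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror verbatim on the address the test passed to
		// --binary-mirror; minikube then requests the same release paths
		// it would otherwise request from dl.k8s.io.
		log.Fatal(http.ListenAndServe("127.0.0.1:33811",
			http.FileServer(http.Dir("./mirror"))))
	}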

Test skip (7/19)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
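
Each SKIP above is an explicit gate, not a failure: the subtest checks a precondition (preload present, wrong GOOS, wrong driver) and calls t.Skip with the reason shown in this report. A sketch of the pattern for the kubectl subtest follows, with the condition reconstructed from its skip message rather than from the test source; the package and test names are illustrative.

	package downloadonly // hypothetical package name for illustration

	import (
		"runtime"
		"testing"
	)

	func TestKubectlDownload(t *testing.T) {
		// Reconstructed from the skip message "Test for darwin and windows":
		// on Linux, kubectl handling is out of scope for this subtest.
		if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
			t.Skip("Test for darwin and windows")
		}
		// ... the kubectl download assertions would follow here ...
	}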
